New imaging applications are booming, from collaborative robots in Industry 4.0, to drones for firefighting or agriculture, to biometric facial recognition, to point-of-care handheld medical devices in the home. A key factor in the emergence of these new applications is that embedded vision is more pervasive than ever. Embedded vision is not a new concept; it simply describes a system in which image acquisition, control and processing happen without an external computer. It has long been used in industrial quality control, the most familiar example being “smart cameras”.

In recent years, the development of affordable hardware from the consumer market has significantly reduced bill-of-materials (BOM) cost and product size compared with earlier computer-based solutions. For example, small system integrators or OEMs can now buy single-board computers or systems-on-module such as the NVIDIA Jetson in small quantities, while larger OEMs can source image signal processors such as the Qualcomm Snapdragon or Intel Movidius Myriad 2 directly. At the software level, commercially available libraries speed up the development of specialized vision systems and reduce configuration effort, even for low-volume production.

The second change driving the development of embedded vision systems is the advent of machine learning, which enables neural networks in the lab to be trained and then uploaded directly to the processor so that it can automatically identify features and make decisions in real time.

Being able to provide solutions for embedded vision systems is critical for imaging companies targeting these high-growth applications. Image sensors play an important role in this large-scale adoption because they directly affect the performance and design of embedded vision systems. Their main driving factors can be summarized as reduced size, weight, power consumption and cost, abbreviated “SWaP-C” (decreasing Size, Weight, Power and Cost).

1. Cost reduction is critical

The accelerator for new embedded vision applications is a price point that matches market demand, and the cost of the vision system is the main constraint on reaching it.

1.1. Optical cost saving

The first way to reduce the cost of a vision module is to reduce its size, for two reasons: first, the smaller the image sensor’s pixels, the more dies fit on a wafer; second, a smaller sensor can use smaller, cheaper optics. Both lower the inherent cost. For example, Teledyne e2v’s Emerald 5M sensor reduces the pixel size to 2.8 µm, allowing S-mount (M12) lenses to be used on a 5-megapixel global shutter sensor, which brings immediate savings: an entry-level M12 lens costs about $10, whereas larger C-mount or F-mount lenses cost 10 to 20 times as much. Reducing size is therefore an effective way to cut the cost of an embedded vision system.

For image sensor manufacturers, this reduction in optical cost has another design implication: in general, the cheaper the optics, the less ideal the angle of incidence at the sensor. Low-cost optics therefore require specific shifted microlenses above the pixels to compensate for distortion and to focus light arriving at wide angles.

1.2. Low-cost sensor interface

In addition to optical optimization, the choice of sensor interface also indirectly affects the cost of the vision system. The MIPI CSI-2 interface is the most suitable choice for achieving cost savings (it was originally developed for the mobile industry by the MIPI Alliance). It has been widely adopted by most ISPs and is now gaining traction in the industrial market because it offers low-cost integration with systems-on-chip (SoC) or systems-on-module (SoM) from companies such as NXP, NVIDIA, Qualcomm or Intel. Designing a CMOS image sensor with a MIPI CSI-2 interface allows its data to be transferred directly to the host SoC or SoM of the embedded system without any intermediate converter bridge, saving cost and PCB space. The advantage is even more pronounced in multi-sensor embedded systems such as 360-degree panoramic systems.
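As an illustration, the sketch below grabs a frame from a MIPI CSI-2 sensor wired directly to the host SoM through a GStreamer pipeline, using Python and OpenCV. The pipeline element names and capabilities are assumptions for an NVIDIA Jetson-style platform with OpenCV built against GStreamer; other SoCs expose their CSI-2 cameras through different drivers.

import cv2

# Capture one frame from a MIPI CSI-2 sensor connected directly to the SoC,
# using a GStreamer pipeline (element names assume an NVIDIA Jetson-style
# platform and an OpenCV build with GStreamer support).
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink drop=true"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open the CSI-2 camera pipeline")

ok, frame = cap.read()  # data arrives from the sensor with no converter bridge
if ok:
    print("Captured frame with shape", frame.shape)
cap.release()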

These benefits are somewhat limited, though, as the MIPI interface is restricted to a connection length of about 20 cm, which may not be optimal in remote setups where the sensor sits far from the host processor. In such configurations, a camera board solution that integrates a longer-reach interface is a better choice, at the expense of miniaturization. Off-the-shelf camera boards from industrial camera manufacturers (e.g., FLIR, AVT, Basler) are often available with MIPI or USB3 interfaces, the latter reaching 3 to 5 meters or more.

1.3. Reduce development costs

Rising development costs are often a challenge when investing in new products; they can run to millions in one-time engineering fees and add time-to-market pressure. For embedded vision this pressure is even greater, because modularity (i.e., the ability of a product to switch between several image sensors) is an important consideration for integrators. Fortunately, development costs can be reduced by providing some degree of cross-compatibility between sensors: for example, by defining families of components that share the same pixel architecture for stable optoelectronic performance, that share a common optical center so a single front-end mechanical design can be reused, and that have compatible PCB footprints to simplify evaluation, integration and the supply chain.

To simplify camera board design (even across multiple sensors), there are two ways to design sensor packages. Pin-to-pin compatibility is the option camera board designers prefer because it lets several sensors share the same circuitry and controls, so sensors can be swapped without changing the PCB design. The alternative is size-compatible sensors, which allow several sensors to be used on the same PCB but may require handling differences in each sensor’s interface and wiring.

Figure 1: Image sensors can be designed to be pin compatible (left) or size compatible (right) for proprietary PCB layout designs

2. Energy efficiency enables longer autonomous operation

Small battery-powered devices are the applications that benefit most obviously from embedded vision, since depending on an external computer rules out portable use. To reduce system power consumption, image sensors now include a variety of features that let system designers save power.

From a sensor perspective, there are several ways to reduce the power consumption of an embedded vision system without sacrificing the acquisition frame rate. The simplest is to minimize the sensor’s dynamic operation at the system level by keeping it in standby or idle mode for as long as possible. Standby mode reduces sensor power consumption to less than 10% of the active mode by switching off the analog circuitry. Idle mode cuts power consumption roughly in half while still allowing the sensor to resume image acquisition within microseconds.
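As a rough sketch of how a system designer might exploit these modes, the Python fragment below duty-cycles a sensor between standby and streaming over I2C. The I2C address, register and mode values are hypothetical placeholders; a real sensor defines the equivalent controls in its datasheet or driver.

import time
from smbus2 import SMBus

I2C_BUS = 1          # assumed I2C bus of the embedded platform
SENSOR_ADDR = 0x36   # hypothetical sensor I2C address
REG_MODE = 0x01      # hypothetical mode register
MODE_STANDBY = 0x00
MODE_STREAM = 0x01

def capture_burst(bus, n_frames, frame_period_s):
    # Wake the sensor, acquire a burst of frames, then return to standby so
    # the analog circuitry is powered down between bursts.
    bus.write_byte_data(SENSOR_ADDR, REG_MODE, MODE_STREAM)
    time.sleep(1e-3)                # wake-up settling time (sensor-specific)
    for _ in range(n_frames):
        time.sleep(frame_period_s)  # placeholder for the actual frame grab
    bus.write_byte_data(SENSOR_ADDR, REG_MODE, MODE_STANDBY)

with SMBus(I2C_BUS) as bus:
    capture_burst(bus, n_frames=10, frame_period_s=1 / 30)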

Another way to build energy savings into the sensor design is to use a more advanced lithography node. The smaller the technology node, the lower the voltage needed to switch the transistors, and since dynamic power dissipation is proportional to the square of the voltage (P ∝ C × V²), this reduces power consumption. Pixels produced ten years ago on a 180 nm process are now made with 110 nm transistors, and the supply voltage of the digital circuitry has dropped from 1.9 V to 1.2 V. The next generation of sensors will use a 65 nm node, making embedded vision applications even more power efficient.
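A quick worked example of this scaling, assuming the switched capacitance and clock frequency stay constant:

# Dynamic CMOS power scales as P ~ C * f * V^2. Keeping capacitance and clock
# frequency constant, lowering the digital supply from 1.9 V to 1.2 V gives:
v_old, v_new = 1.9, 1.2
ratio = (v_new / v_old) ** 2
print(f"Dynamic power drops to {ratio:.0%} of its former value "
      f"(about {1 - ratio:.0%} saving from voltage scaling alone)")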

Finally, choosing the right image sensor can reduce the power consumed by LED lighting under certain conditions. Some systems must use active illumination, for example for 3D map generation, to freeze motion, or simply to pulse light at a specific wavelength to enhance contrast. In these cases, lowering the image sensor’s noise in low-light conditions allows the illumination power to be reduced: with a lower-noise sensor, engineers can reduce the LED drive current or the number of LEDs integrated into the embedded vision system. In other cases, where image capture and LED strobing are triggered by an external event, choosing the appropriate sensor readout architecture brings significant power savings. With a traditional rolling shutter sensor, the LED must stay on for the whole time the frame is being exposed, whereas a global shutter sensor allows the LED to be activated for only a fraction of the frame time. Replacing a rolling shutter sensor with a global shutter sensor that uses in-pixel correlated double sampling (CDS) therefore saves lighting power while keeping noise as low as the CCD sensors used in microscopy.
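The sketch below puts illustrative numbers on this: it compares the per-frame LED energy when the strobe only needs to cover a short global shutter exposure versus when it must span the exposure plus the full rolling shutter readout. The exposure, readout and LED power figures are examples, not values from any particular sensor.

# Per-frame LED energy when strobing only during a short global shutter
# exposure, versus keeping the LED on for the exposure plus the full rolling
# shutter readout. All numbers are illustrative examples.
def led_energy_mj(on_time_s, led_power_w):
    return led_power_w * on_time_s * 1e3  # millijoules per frame

exposure_s = 0.5e-3   # 0.5 ms strobe, short enough to freeze motion
readout_s = 10e-3     # ~10 ms to read out a full frame row by row
led_power_w = 5.0     # LED drive power

global_shutter = led_energy_mj(exposure_s, led_power_w)
rolling_shutter = led_energy_mj(exposure_s + readout_s, led_power_w)
print(f"Global shutter : {global_shutter:.2f} mJ per frame")
print(f"Rolling shutter: {rolling_shutter:.2f} mJ per frame")
print(f"Lighting energy saved: {1 - global_shutter / rolling_shutter:.0%}")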

3. On-chip functionality paves the way for application-specific vision systems

Pushing the concept of embedded vision to its limit would mean fully customizing the image sensor, integrating all processing functions (system-on-chip) in a 3D stack for optimized performance and power consumption. The cost of developing this type of product is very high, however, and while a fully custom sensor with that level of integration is not impossible in the long run, we are currently in a transitional phase in which certain functions are embedded directly into the sensor to reduce the computational load and speed up processing.

In barcode reading applications, for example, Teledyne e2v has patented technology that embeds a proprietary barcode identification algorithm into the sensor chip: it finds the position of the barcodes within each frame, so the image signal processor only has to work on those regions, increasing data processing efficiency.
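The on-chip algorithm itself is proprietary, but the general idea of restricting processing to barcode-like regions can be illustrated with a classic host-side OpenCV heuristic (strong one-directional gradients closed into candidate rectangles). This sketch only illustrates the region-of-interest principle; it is not Teledyne e2v’s implementation, and the input file name is a placeholder.

import cv2
import numpy as np

def barcode_rois(gray, min_area=2000):
    # Barcodes show strong gradients in one direction; closing the thresholded
    # gradient image into rectangles yields candidate regions of interest.
    grad_x = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    grad_y = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = cv2.convertScaleAbs(cv2.subtract(grad_x, grad_y))
    grad = cv2.blur(grad, (9, 9))
    _, thresh = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if gray is not None:
    for x, y, w, h in barcode_rois(gray):
        roi = gray[y:y + h, x:x + w]  # only these crops go to the decoder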

Figure 2: Teledyne e2v Snappy 5-megapixel chip, automatically identifying barcode position

Another feature that reduces the processing load and delivers only “good” data is Teledyne e2v’s patented fast self-exposure mode, which enables the sensor to automatically correct its exposure time to avoid saturation when lighting conditions change. It optimizes processing time because it adapts to lighting fluctuations within a single frame, and this quick response minimizes the number of “bad” images the processor has to handle.
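The in-sensor feature reacts within a single frame, but the underlying principle can be sketched as a simple host-side control loop: shorten the exposure as soon as too many pixels clip, otherwise track a target mean level. The thresholds and limits below are illustrative, and the returned value would be written to the sensor’s exposure register by the platform’s camera driver.

import numpy as np

def next_exposure(frame, exposure_us, target_mean=110.0, sat_limit=0.01):
    # If too many pixels clip, back off quickly; otherwise track a target
    # mean level. Limits are illustrative microsecond bounds.
    if np.mean(frame >= 250) > sat_limit:
        return exposure_us * 0.5
    mean = max(float(frame.mean()), 1.0)
    return float(np.clip(exposure_us * target_mean / mean, 10.0, 30000.0))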

These functions are often specific and require a good understanding of the customer’s application. With sufficient knowledge of the application, a variety of other on-chip functions can be designed to optimize embedded vision systems.

4. Reduced size and weight to fit the smallest application space

Another major requirement for embedded vision systems is the ability to fit into tight spaces or to be light enough for handheld devices and/or to extend the operating time of battery-powered products. This is why most embedded vision systems today use small-optical-format, low-resolution sensors of only 1 to 5 megapixels.

Reducing pixel size is only the first step in shrinking the footprint and weight of the image sensor. Today’s 65 nm process allows the global shutter pixel to be reduced to 2.5 µm without sacrificing optoelectronic performance. This makes it possible, for example, to fit a Full HD global shutter CMOS image sensor into the sub-1/3-inch optical formats required by the mobile phone market.
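A quick back-of-the-envelope check of that claim, using nominal optical format diagonals (a 1/3-inch format has a diagonal of roughly 6 mm):

import math

# A 1920 x 1080 array of 2.5 um global shutter pixels:
pixel_um, width, height = 2.5, 1920, 1080
w_mm = width * pixel_um / 1000
h_mm = height * pixel_um / 1000
diag_mm = math.hypot(w_mm, h_mm)
print(f"Active area {w_mm:.1f} x {h_mm:.1f} mm, diagonal {diag_mm:.2f} mm")
# -> diagonal ~5.5 mm, below the ~6 mm diagonal of a nominal 1/3-inch format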

Another major technique for reducing sensor weight and footprint is to shrink the package. Chip-scale packaging has grown rapidly in the market over the past few years, especially in mobile, automotive and medical applications. Compared with the ceramic land grid array (CLGA) packages traditionally used in the industrial market, chip-scale fan-out packaging achieves a higher connection density, making it an excellent answer to the size and weight challenges of image sensors in embedded systems. For example, the chip-scale package of Teledyne e2v’s Emerald 2M image sensor is half the height of the equivalent ceramic package and 30% smaller in footprint.

Figure 3: Comparison of the same chip in a CLGA package (left) and a wafer-level fan-out organic package (right). The latter can reduce footprint, thickness and cost.

Going forward, we expect new technologies to further enable the smaller sensor sizes required for embedded vision systems.
Three-dimensional stacking is an innovative technology for producing semiconductor devices: the different circuits are fabricated on separate wafers and then stacked and interconnected using copper-to-copper bonding and through-silicon vias (TSVs). Because the dies overlap, a three-dimensionally stacked sensor achieves a smaller footprint than a conventional sensor: the readout and processing circuitry can be placed underneath the pixel array and row decoders. This not only shrinks the sensor footprint but also leaves room to add processing resources to the sensor and so reduce the load on the image signal processor.

Figure 4: Three-dimensional chip stacking technology enables overlapping combinations of pixel chips, analog and digital circuits, and even additional processing chips for specialized applications, reducing sensor area.

However, several challenges remain before 3D stacking can be widely used in the image sensor market. First, it is an emerging technology; second, the additional process steps make it more expensive, with chips costing more than three times as much as those built with conventional technology. For now, 3D stacking will therefore mainly be the choice for embedded vision systems that need either very high performance or a very small footprint.

In summary, embedded vision can be described as a “lightweight” vision technology usable by many kinds of companies, including OEMs, system integrators and standard camera manufacturers. “Embedded” is a general description covering many different applications, so it is impossible to draw up an exhaustive list of their characteristics, but a few general rules apply when optimizing an embedded vision system. Broadly speaking, the market driver is not extreme speed or extreme sensitivity, but size, weight, power consumption and cost. The image sensor is a major contributor to all of these, so it must be chosen carefully to optimize the overall performance of the embedded vision system. The right image sensor gives embedded designers more flexibility, saves bill-of-materials cost, and reduces the footprint of the lighting and optics. It also lets designers choose from a wide range of affordable image signal processors with optimized deep-learning capabilities from the consumer market, without facing additional complexity.
