Tuesday, May 23, 2017

Omnivision Announces HDR Sensors with LED Flicker Mitigation, Surround View ISP

PRNewswire: OmniVision introduces the 1.3MP OX1A10 and 1.7MP OX2A10 for side- and rear-view camera monitoring systems (CMS), respectively. Built on 4.2um BSI split-pixel technology for HDR, the new sensors offer LED flicker reduction.

"In regular HDR cameras, the short exposure time causes the image sensor to miss the LED 'on' pulse, giving the appearance of 'flicker' in the video stream on a display. Merely increasing the exposure time of normal pixel technology to capture the LED pulse does not solve the problem, but rather causes saturation and loss of dynamic range," said Marius Evensen, product marketing manager at OmniVision. "We designed the OX1A10 and OX2A10 image sensors with LED flicker–reduction technology to specifically mitigate this problem and enable mass adoption of e-mirrors in the automotive market. These sensors join our growing portfolio of automotive specific digital imaging solutions targeted at both machine and vision display systems."

The OX1A10 and OX2A10 achieve 110dB HDR while guaranteeing LED pulse capture. The OX1A10 supports 1280 × 1080 resolution in a 1:1.2 aspect ratio for side-view cameras. Targeting rear-view cameras, the OX2A10 supports 1840 × 940 resolution in a 2:1 aspect ratio. The sensors' on-chip combination algorithm reduces the output data rate for easier data transmission and back-end processing.

The OX1A10 and OX2A10 are currently in volume production.

PRNewswire: OmniVision announces the OV493, a companion chip with surround-video image-processing capabilities for automotive applications. Each OV493 can process two video streams simultaneously, and two ISP companion chips can process four camera inputs for surround-view applications.

"As advanced automotive driver-assistance features, such as 360-degree surround-view systems, become more popular, automotive manufacturers seek imaging solutions that are suitable for multiple vehicle platforms and can meet stringent industry standards," said Andy Hanvey, senior automotive marketing manager at OmniVision. "The OV493 gives Tier-1 OEMs an opportunity to reduce system cost, maintain high performance, and design distributed architectures for multiple driver-assistance systems."

Monday, May 22, 2017

Omnivision Introduces 2MP Automotive Sensor

PRNewswire: OmniVision introduced the OV2311, an automotive 2MP, 3um global shutter IR-enhanced image sensor for driver monitoring systems.

To combat distracted driving, the automotive industry is ramping up its development of driver monitoring systems and vehicle co-pilot applications, which in combination can allow the on-board computer to seize or relinquish control of the vehicle, based on the driver's state. NHTSA defines this setup as level 3 autonomy. Currently only available for luxury vehicles, these systems are expected to become a standard safety feature in the near future.

"The demand for driver monitoring systems is expected to increase significantly as more affordable technologies allow advanced semi-automated features to transition from high-end to mainstream vehicles," said Jeff Morin, automotive product marketing manager at OmniVision. "Possessing the same capability customarily found in much larger and more expensive sensors, the OV2311 aims to bring advanced driver monitoring systems to the masses by delivering high-level, cost-effective performance in a compact form factor."

Vision-based driver monitoring systems in semi-autonomous vehicles require highly sophisticated eye-tracking technology and imaging capabilities. The OV2311 achieves high NIR QE to minimize active illumination power.

The OV2311 is available for sampling, with volume production expected in Q4 2017.

SoftKinetic ToF Gesture Control Powers BMW 5 and 7 Series

PRNewswire: SoftKinetic announces that BMW has extended the use of its ToF camera for gesture control to its 5 Series cars, in addition to last year's 7 Series.

"SoftKinetic is proud to expand our technology partnership with BMW Group to include both the BMW 7 and BMW 5 series cars," said Eric Krzeslo, CMO of SoftKinetic. "The infotainment gesture control we see in the BMW cars is just the beginning of the innovation we are bringing to the automotive market. Our technology can improve driver safety through driver assistance and monitoring and 3D vision cameras that ascertain the environment in and out of the vehicle at all times paving the way towards the fully autonomous vehicle."

Sunday, May 21, 2017

Intel Euclid Vision Computer

Intel keeps investing in its vision-based solutions and capabilities. The recently announced Euclid Development Kit is a fully stand-alone computer integrating a RealSense IR stereo depth camera, a fisheye camera, an RGB camera, an Atom x7-Z8700 quad-core CPU, a microphone, GPS, WiFi, and Bluetooth into a compact all-in-one computer and depth camera the size of a candy bar. It comes with a 2000mAh battery, so it is completely self-contained.

Thanks to AM for the link!

CrucialTec In-Display Fingerprint Sensor Patent

KoreaHerald Investor reports that CrucialTec has been granted a US patent for its in-display fingerprint solution, which the company calls DFS:

“The company is in talks with some global clients to commercialize the fingerprint tech in the whole area of a smartwatch screen and a certain part of a smartphone display,” a CrucialTec official said. The newly patented technology is said to feature three thin-film transistors for each electrode to pick up high-resolution images, compared with one per electrode in the existing fingerprint scanner. That configuration is said to maximize the sensing capability while maintaining a high transparency level of the components.

In spite of mentioning 3T technology, some of the company's recent patent applications keep talking about 1T pixels:

MobileIDWorld, BiometricUpdate: In-display fingerprint sensing solutions have been gathering quite a lot of attention recently. Goodix presented its solution at MWC in Barcelona this year. Synaptics and OXi Technology were reported to be developing similar technology some time ago. Apple is rumored to be integrating a similar sensor into its future iPhone displays.

Saturday, May 20, 2017

Great Minds Think Alike

ST patent application US20170134683 "Global-shutter image sensor" by François Guyader and François Roy proposes a light-screening layer to protect the charge storage node in a BSI GS pixel, fairly similar to the TSMC proposal published two weeks ago:

General Electric patent application US20170135179 "Image sensor controlled lighting fixture" by Laszlo Balazs, Tamas Both, and Jean-Marc Naud proposes integration of an image sensor into a lighting fixture: "The controller receives detection signal data from the image sensor and wide-angle lens component when a user is within a detection area associated with a view angle of the wide angle lens, and then determines the position of the user. The controller then controls the illuminance of the light source based on the position of the user." This is quite similar to the Cree proposal published a week ago:

Friday, May 19, 2017

Low-Cost 20,000 fps Film Camera from 1980s

DexterLab2013 publishes a nice educational video explaining the operation of the 20,000fps Photec IV 16mm film camera. The camera is said to represent the state of the art in low-cost high-speed imaging in the 1980s:

ON Semi Industrial Imaging Presentation

ON Semi publishes a video presentation on Python image sensors for industrial applications:

Thursday, May 18, 2017

Market Shares, View from China

Beijing, China-based Chlue Research publishes its "Global CMOS Image Sensor Market Report 2016" filled with a lot of interesting data from 2015. Here is a part about the market shares (Piart is meant to be Pixart, probably):

The 2017 version of the report is still behind a paywall.

Wednesday, May 17, 2017

SK Telecom Image Sensor Noise Generator Wins First Customers

IoTNow reports that Aeris has become one of the first customers of SK Telecom's image sensor-based quantum noise generator:

"SK Telecom’s chip operates on a process called quantum shot noise that generates mathematically proven random numbers.

Quantum shot noise is more than a buzzword; it’s a scientifically tested principle that relies on the ricochet of light waves that produce patterns that always are random and unique. SK Telecom’s security chip generates quantum shot noise with two LEDs inside the chip itself.

The LEDs generate photons that bounce around inside the chip and are detected by a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor within the chip. The shot noise is the final image detected by the CMOS sensor, and it is this shot noise that truly is random.

The new security chip is estimated to cost a few dollars, making it a very attractive option for robust, future-proof security.
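The general principle behind such chips can be sketched in a few lines (this is not SK Telecom's actual design): harvest the noisiest least-significant bits of raw sensor readings, then debias them with a von Neumann extractor. The "readings" below are simulated with a seeded PRNG purely for reproducibility; a real device would read raw pixel values dominated by shot noise.

```python
# Sketch: random bits from noisy sensor LSBs, debiased with a von Neumann extractor.
import random

def von_neumann_extract(bits):
    """Debias a bit stream: map pairs 01 -> 0, 10 -> 1, discard 00 and 11."""
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

random.seed(42)  # stand-in for physical shot noise, for reproducibility
readings = [random.randint(0, 1023) for _ in range(4000)]  # 10-bit pixel values
raw_bits = [r & 1 for r in readings]   # LSB is dominated by the noise
rand_bits = von_neumann_extract(raw_bits)
print(len(rand_bits), sum(rand_bits) / len(rand_bits))
```

The extractor trades throughput for quality: at least half the raw bits are discarded, but the surviving stream is unbiased as long as successive raw bits are independent.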

Tuesday, May 16, 2017

Imec Eliminates Image Sensor from Eye Tracker

Imec and Holst Centre (set up by imec and TNO) announce a technology to detect eye movement in real time based on electrical sensing, aimed at virtual and augmented reality applications.

Today’s eye movement detection technology makes use of high-resolution cameras embedded in eye-tracking screens or glasses, already commercialized for numerous applications, including healthcare, research and gaming. While camera-based solutions can accurately determine where users are looking, most cameras’ frame rates are not fast enough to match the eye’s most rapid movements, such as saccades – a typical movement during reading. Using a more sophisticated camera that matches the eyes’ speed is said to significantly increase the cost of these devices and could have implications for their commercial use. Imec’s solution, based on electrical sensing, offers a far less expensive alternative, while solving the issue of the image processing delay.

Imec’s sensors were integrated into a set of glasses, with four built-in electrodes around each lens, two to pick up the eye’s vertical movement and two for horizontal movements. Parallel to that, an advanced algorithm was developed to translate the signals into a concrete position, based on the angle the eye is making with its central point of vision. Imec’s solution also offers insights on the eye’s behavior, like the speed of movement or the frequency and duration of blinks.
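The electrode arrangement lends itself to a simple differential readout model. The sketch below is only an illustration of the principle, not imec's algorithm: each axis has an electrode pair, and a per-user linear calibration maps the differential voltage to a gaze angle; the gain value is a made-up placeholder.

```python
# Sketch: mapping differential EOG electrode voltages to gaze angles.

def gaze_angles(v_left, v_right, v_up, v_down, gain_deg_per_mv=0.5):
    """Map differential electrode voltages (mV) to (horizontal, vertical)
    gaze angles in degrees relative to the central point of vision."""
    h = gain_deg_per_mv * (v_right - v_left)  # positive = looking right
    v = gain_deg_per_mv * (v_up - v_down)     # positive = looking up
    return h, v

h, v = gaze_angles(v_left=10.0, v_right=50.0, v_up=30.0, v_down=30.0)
print(h, v)  # 20 degrees to the right, no vertical deflection
```

In practice the mapping is nonlinear near the extremes and drifts over time, which is where the "advanced algorithm" mentioned above would earn its keep.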

“Human eyes have a natural electrical potential,” stated Gabriel Squillace, researcher in the Biomedical Applications & Systems group at imec. “At imec, we are leveraging this feature to develop the next generation of eye-movement detection devices that can detect the eye’s position in real time at a five times lower cost and up to four times faster than what is currently available on the market. Imec’s ultimate goal is to develop a solution that can track the eye’s most rapid movements, such as saccades, enabling seamless real-time tracking for AR and VR applications.”

Sony Announces 1000fps Sensor Stacked on Top of Vision Processor

Sony announces the IMX382 high-speed vision sensor, which enables detection and tracking of objects at 1,000 fps. Sony begins sampling it in October 2017.

This vision sensor features a stacked configuration with a BI pixel array and signal processing circuit layer. The circuit layer is equipped with image processing circuits and a programmable column-parallel processor, delivering high-speed target detection and tracking. The new sensor uses information such as color and brightness obtained from pixels to detect objects, then extracts the object's centroid, moment and motion vector, and finally outputs the information from the vision sensor in each frame.
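The per-frame statistics described above (centroid and motion vector) are simple to state in host-side Python, shown here only as an illustration; on the IMX382 the equivalent computation runs on the stacked column-parallel processor at 1,000 fps.

```python
# Sketch: centroid and frame-to-frame motion vector of a detected object.

def centroid(mask):
    """Centroid (x, y) of nonzero pixels in a 2D binary detection mask."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(mask):
        for x, val in enumerate(row):
            if val:
                xs += x
                ys += y
                n += 1
    return (xs / n, ys / n) if n else None

frame0 = [[0, 1, 1],
          [0, 1, 1],
          [0, 0, 0]]
frame1 = [[0, 0, 0],
          [0, 1, 1],
          [0, 1, 1]]
c0, c1 = centroid(frame0), centroid(frame1)
motion = (c1[0] - c0[0], c1[1] - c0[1])  # object moved one pixel down
print(c0, c1, motion)
```

Outputting only such compact descriptors per frame, instead of full images, is what lets the sensor sustain 1,000 fps tracking without a 1,000 fps data link.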

Sony Youtube video demos the new 1.27MP vision sensor capabilities:

Monday, May 15, 2017

Pixart Sales Rise

Digitimes reports that Pixart's net profit rose 22.4% QoQ and 230% YoY in Q1 2017. The company expects its revenues to grow 10-15% sequentially in Q2 on increased sales for gaming notebooks and laser mice. The company's gross margin is expected to be in the 53-54% range in Q2, compared with 52.8% in Q1. Pixart's product mix is slowly shifting away from the optical mouse business:

Analysis of RGB + Mono Dual Camera for Low Light Photography

OSA Optics Express paper "Enhancement of low light level images using color-plus-mono dual camera" by Yong Ju Jung, Gachon University, Seongnam, Korea, discusses the approach: "A color-plus-mono dual camera that consists of two horizontally separate image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images."
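A toy version of the fusion idea the paper studies (not the paper's algorithm) is to keep luminance from the low-noise mono sensor and chrominance from the noisier color sensor, assuming the image pair is already registered pixel-to-pixel. The BT.601-style luma weights below are standard; everything else is illustrative.

```python
# Sketch: per-pixel color+mono fusion -- mono luma, color chroma.

def fuse_pixel(rgb, mono_luma):
    """Replace the color pixel's luma with the mono sensor's luma,
    preserving its chroma (simple Y'CbCr-style decomposition)."""
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luma of the color pixel
    cb, cr = b - y, r - y                  # chroma offsets
    y2 = mono_luma                         # cleaner luma from the mono sensor
    r2, b2 = cr + y2, cb + y2
    g2 = (y2 - 0.299 * r2 - 0.114 * b2) / 0.587
    return (r2, g2, b2)

fused = fuse_pixel(rgb=(100, 100, 100), mono_luma=120.0)
print(fused)  # a gray pixel stays gray, at the mono sensor's brightness
```

The paper's caution applies directly here: if the two sensors are misregistered, the chroma offsets come from the wrong scene point and the fused image shows color fringing artifacts.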

Espros ToF Sensors Lineup

Espros Photonics presents its ToF image sensors lineup ranging from 8 x 8 pixel to QVGA resolution:

Sunday, May 14, 2017

Pseudorandom Pixels and Jaggies

IEICE Electronics Express, Vol.14, No.9 publishes the paper "CMOS image sensor with pseudorandom pixel placement for jaggy elimination" by Junichi Akita, Kanazawa University, Japan. This quite old idea has now been implemented all the way to silicon, and its advantage has been demonstrated:

Image Sensor in Every Lightbulb

Cree, one of the world's largest illumination LED manufacturers, has filed a patent application, US20170127492 "Lighting fixture with image sensor module" by Robert D. Underwood and John Roberts, proposing integration of an image sensor into an LED lighting fixture:

"the derived image data is used to determine an ambient light level in an area surrounding the lighting fixture. In particular, a mean light intensity for a number of zones (i.e., a zoned mean light intensity) within a frame (or a number of frames) of the image data may be obtained from the image sensor module and used to determine the ambient light level. In one embodiment, zones within the frame of image data for which the mean light intensity is a predetermined threshold above the mean light intensity for one or more other zones may be ignored (not included in the average) to increase the accuracy of the determined ambient light level.

In one embodiment, the derived image data is used to determine an occupancy event within the area surrounding the lighting fixture. In particular, a mean light intensity for a number of zones within the image data may be obtained from the image sensor module and used to determine whether an occupancy event has occurred.
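The zoned ambient-light estimate in the first claim quoted above is straightforward to sketch (the threshold and zone values below are made-up examples, not from the patent): compute a mean intensity per zone, then exclude any zone that is more than a threshold above the mean of the other zones, such as a zone aimed at a sunlit window.

```python
# Sketch: ambient light level from zoned mean intensities with outlier rejection.

def ambient_level(zone_means, threshold=50.0):
    """Average of zone mean intensities, excluding zones that are more than
    `threshold` above the mean of the remaining zones."""
    kept = []
    for i, z in enumerate(zone_means):
        others = zone_means[:i] + zone_means[i + 1:]
        if z <= sum(others) / len(others) + threshold:
            kept.append(z)
    return sum(kept) / len(kept)

# Three dim zones and one very bright one (direct sunlight through glass):
level = ambient_level([20.0, 25.0, 30.0, 200.0])
print(level)  # the 200.0 zone is excluded -> mean of the remaining three
```

Without the rejection step the single bright zone would pull the estimate to 68.75 and the fixture would dim the room based on a window, not on the occupants' actual ambient light.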

Saturday, May 13, 2017

Qualcomm Introduces its Iris Authentication Solution

Snapdragon 835 features Iris Authentication software solution:

Qualcomm explains:

"Due to the nature of information that needs to be captured for iris authentication (the iris has 225 points of comparison), an infrared (IR) LED illuminates the iris while the IR camera captures the iris image. What’s displayed, however, is a color preview captured by a front-facing camera. It’s a far less spooky-looking image, allowing for a better user experience.

Our iris authentication solution uses the Snapdragon 835’s camera security framework. Camera security is engineered to help provide hardware-based authentication and prevent the iris image from being captured by malware. This means the malware cannot fake authentication and replay the captured image into the Trusted Execution Environment. Both the IR image capture and RGB preview take advantage of camera security capabilities; each time the user scans her iris to open the phone or pay for something, the path is then hardened, so it becomes increasingly difficult for the image to be captured and used for nefarious reasons.

The iris authentication solution takes advantage of the Snapdragon 835’s processing power, which is engineered to deliver an unlock speed of less than 100ms. The solution makes quick work of the verification even if you’re wearing glasses, sunglasses, or even contact lenses. That’s because the iris authentication solution is capable of recognizing a variety of eyewear and extracting only the necessary information while ignoring the rest. And it’s designed to work well in indoor and outdoor environments so you can unlock your phone just as easily in bright and dark lighting conditions.

All iris authentication systems are vulnerable to attack by spoofing. For example, hackers might present fake iris photographs or videos of an authorized user and attempt to gain access to a device. Qualcomm Technologies’ iris authentication solution is designed to help withstand such attacks through an anti-spoofing mechanism known as liveness detection. Liveness detection solutions work to ensure that the iris being presented to the device is that of an authorized (and living) user.

Friday, May 12, 2017

Image Sensor Papers at VLSI Symposia 2017

The VLSI Symposia, to be held on June 5-8, 2017 in Kyoto, Japan, has published its program, which has significant image sensor content:

An All Pixel PDAF CMOS Image Sensor with 0.64μm×1.28μm Photodiode Separated by Self-Aligned In-Pixel Deep Trench Isolation for High AF Performance,
S. Choi, K. Lee, J. Yun, S. Choi, S. Lee, J. Park, E. S. Shim, J. Pyo, B. Kim, M. Jung, Y. Lee, K. Son, S. Jung, T.-S. Wang, Y. Choi, D.-K. Min, J. Im, C.-R. Moon, D. Lee and D. Chang, Samsung, Korea
We present a CMOS image sensor (CIS) with phase detection auto-focus (PDAF) in all pixels. The size of photodiode (PD) is 0.64μm by 1.28μm, the smallest ever reported and two PDs compose a single pixel. Inter PD isolation was fabricated by deep trench isolation (DTI) process in order to obtain an accurate AF performance. The layout and depth of DTI was optimized in order to eliminate side effects and maximize the performance even at extremely low light condition up to 1lux. In particular the AF performance remains comparable to that of 0.70μm dual PD CIS. By using our unique technology, it seems plausible to scale further down the size of pixels in dual PD CIS without sacrificing AF performance.

A Shutter-Less Micro-Bolometer Thermal Imaging System Using Multiple Digital Correlated Double Sampling for Mobile Applications,
S. Park, T. Cho, M. Kim, H. Park and K. Lee, KAIST and Seoul National Univ. of Science and Technology, Korea
A micro-bolometer focal plane array (MBFPA)-based long wavelength Infra-red thermal imaging sensor is presented. The proposed multiple digital correlated double sampling (MD-CDS) readout method employing newly designed reference-cell greatly reduces PVT variation-induced fixed pattern noise (FPN) and as a result features much relaxed calibration process, easier TEC-less operation and Shutter-less operation. The readout IC and MBFPA was fabricated in 0.35um CMOS and amorphous silicon MEMS process respectively. The fabricated MBFPA thermal imaging sensor has NETD performance of 0.1 kelvin even though the mechanical shutter is not used.

Trantenna: Monolithic Transistor-Antenna Device for Real-Time THz Imaging System,
M. W. Ryu, R. Patel, S. H. Ahn, H. J. Jeon, M. S. Choe, E. Choi, K. J. Han and K. R. Kim, UNIST, Korea
We report a circular-shape monolithic transistor-antenna (trantenna) for high-performance plasmonic terahertz (THz) detector. By designing an asymmetric transistor on a ring-type metal-gate structure, more enhanced (45 times) channel charge asymmetry has been obtained in comparison with a bar-type asymmetric transistor of our previous work. In addition, by exploiting ring-type transistor itself as a monolithic circular patch antenna, which is designed for a 0.12-THz resonance frequency, we demonstrated the highly-enhanced responsivity (Rv) > 1 kV/W (x 5) and reduced noise-equivalent power (NEP) < 10 pW/Hz0.5 (x 1/10).

Chip-Scale Fluorescence Imager for In Vivo Microscopic Cancer Detection,
E. P. Papageorgiou, B. E. Boser and M. Anwar, Univ. of California, Berkeley and Univ. of California, San Francisco, USA
Modern cancer treatment faces the pervasive challenge of identifying microscopic cancer foci in vivo, but no imaging device exists with the ability to identify these cells intraoperatively, where they can be removed. We introduce a novel CMOS sensor that identifies foci of less than 200 cancer cells labeled with fluorescent biomarkers in 50ms. The sensor’s miniature size enables manipulation within a small, morphologically complex, tumor cavity. Recognizing that focusing optics traditionally used in fluorescence imagers present a barrier to miniaturization, we integrate stacked CMOS metal layers above each photodiode to form angle-selective gratings, rejecting background light and deblurring the image. A high-gain capacitive transimpedance amplifier based pixel with 8.2V/s per pW sensitivity and a dark current minimization circuit enables rapid detection of microscopic clusters of 100s of tumor cells with minimal error.

A 4.1Mpix 280fps Stacked CMOS Image Sensor with Array-Parallel ADC Architecture for Region Control,
T. Takahashi, Y. Kaji, Y. Tsukuda, S. Futami, K. Hanzawa, T. Yamauchi, P. W. Wong, F. Brady, P. Holden, T. Ayers, K. Mizuta, S. Ohki, K. Tatani, T. Nagano, H. Wakabayashi, and Y. Nitta, Sony, Japan and Sony, USA
A 4.1Mpix 280fps stacked CMOS image sensor with array-parallel ADC architecture is developed for region control applications. The combination of an active reset scheme and frame correlated double sampling (CDS) operation cancels Vth variation of pixel amplifier transistors and kTC noise. The sensor utilizes a floating diffusion (FD) based back-illuminated (BI) global shutter (GS) pixel with 4.2e-rms readout noise. An intelligent sensor system with face detection and high resolution region-of-interest (ROI) output is demonstrated with significantly low data bandwidth and low ADC power dissipation by utilizing a flexible area access function.

A 256 Energy Bin Spectrum X-Ray Photon-Counting Image Sensor Providing 8Mcounts/s/pixel and On-Chip Charge Sharing, Charge Induction and Pile-Up Corrections,
A. Peizerat, J.-P. Rostaing, P. Ouvrier-Buffet, S. Stanchina, P. Radisson, and E. Marché, CEA-LETI and Multix, France
To achieve better and faster material discrimination in applications like security inspection, X-Ray image sensors giving a highly resolved energy spectrum per pixel are required. In this paper, a new pixel architecture for spectral imaging is presented, exhibiting a 256 bin spectrum per pixel in a single image duration, up to two orders of magnitude higher than previous works. A prototype circuit, composed of 4x8 pixels of 756μmx800μm and hybridized to a CdTe crystal, was fabricated in a 0.13μm process. Our pixel architecture has been measured at 8 Mcounts/s/pixel while embedding on-chip charge sharing, charge induction and pile-up corrections.

A 0.61 E- Noise Global Shutter CMOS Image Sensor with Two-Stage Charge Transfer Pixels,
K. Yasutomi, M. W. Seo, M. Kamoto, N. Teranishi and S. Kawahito, Shizuoka Univ., Japan
A low-noise global shutter (GS) CMOS image sensor (CIS) with two-stage charge transfer (2-CT) structure is presented. The low-noise wide dynamic range performance of the proposed pixel has been demonstrated by using column-parallel folding integration (FI)/cyclic ADCs. The GS image sensor with 5.6μm-pitch 1200 x 900 pixels is implemented with a 0.11μm CIS technology. The noise and dynamic range are measured to be 0.61 e-rms and 81 dB, respectively.

224-ke Saturation Signal Global Shutter CMOS Image Sensor with In-Pixel Pinned Storage and Lateral Overflow Integration Capacitor,
Y. Sakano, S. Sakai, Y. Tashiro, Y. Kato, K. Akiyama, K. Honda, M. Sato, M. Sakakibara, T. Taura, K. Azami, T. Hirano, Y. Oike, Y. Sogo, T. Ezaki, T. Narabu, T. Hirayama, and S. Sugawa, Sony and Tohoku Univ., Japan
The required incorporation of an additional in-pixel retention node for global shutter complementary metal-oxide semiconductor (CMOS) image sensors means that achieving a large saturation signal presents a challenge. This paper reports a 3.875-μm pixel single exposure global shutter CMOS image sensor with an in-pixel pinned storage (PST) and a lateral-overflow integration capacitor (LOFIC), which extends the saturation signal to 224 ke, thereby enabling the saturation signal per unit area to reach 14.9 ke/μm2. This pixel can assure a large saturation signal by using a LOFIC for accumulation without degrading the image quality under dark and low illuminance conditions owing to the PST.

320x240 Back-Illuminated 10µm CAPD Pixels for High Speed Modulation Time-of-Flight CMOS Image Sensor,
Y. Kato, T. Sano, Y. Moriyama, S. Maeda, T. Yamazaki, A. Nose, K. Shina, Y. Yasu, W. van der Tempel, A. Ercan and Y. Ebiko, Sony, Japan and SoftKinetic, Belgium
A 320x240 back-illuminated Time-of-Flight CMOS image sensor with 10µm CAPD pixels has been developed. The back-illuminated (BI) pixel structure maximizes the fill factor, allows for flexible transistor position and makes the light path independent of the metal layer. In addition, the CAPD pixel, which is optimized for high speed modulation, results in 80% modulation contrast at 100MHz modulation frequency.

An Imager Using 2-D Single-Photon Avalanche Diode Array in 0.18-μm CMOS for Automotive LIDAR Application,
H. Akita*, I. Takai, K. Azuma, T. Hata and N. Ozaki, DENSO and Toyota, Japan
A feasibility imager chip of a 32 x 4-pixel array was developed in a 0.18-μm CMOS process for a small size automotive laser imaging detection and ranging. Each pixel consists of 8 single-photon avalanche diodes as a world-first 2-D pixel array with digital output macro pixel architecture which enables laser signal sensing under sunlight noise. Distance measurement results show less than 2.1% nonlinearity and 0.11-m standard deviation up to 20-m distance with 10%-reflective target under the ambient light of 75 klux.

A 16.5 Giga Events/s 1024 × 8 SPAD Line Sensor with Per-Pixel Zoomable 50ps-6.4ns/bin Histogramming TDC,
A. T. Erdogan, R. Walker, N. Finlayson, N. Krstajić, G. O. S. Williams and R. K. Henderson, Univ. of Edinburgh, UK
A 1024 × 8 single photon avalanche diode (SPAD) based line sensor for time resolved spectroscopy is implemented in 0.13μm imaging CMOS with 23.78 μm pixel pitch at 49.31% fill factor. The line sensor can operate in single photon counting (SPC) mode (65 giga-events/s), time-correlated single photon counting (TCSPC) mode (194 million events/s) or histogramming mode (16.5 giga-events/s), increasing the count rate up to 85 times compared to TCSPC operation. This performance is enabled by a 512 channel histogramming TDC with 50ps-6.4ns/bin zoomable time resolution.

A 272.49 pJ/pixel CMOS Image Sensor with Embedded Object Detection and Bio-Inspired 2D Optic Flow Generation for Nano-Air-Vehicle Navigation,
K. Lee, S. Park, S.-Y. Park, J. Cho and E. Yoon, Univ. of Michigan, USA
We report a CMOS imager embedded with energy-efficient object detection and bio-inspired 2D optic flow generation cores for navigation of nano-air-vehicles (NAVs). The proposed vision-based navigation system employs spatial difference imaging and gradient orientation using mixed-signal circuits to achieve both energy-efficient and area-efficient implementation. The system achieved 272.49 pJ/pixel with 75% reduction in memory size for integrated operation of object detection and 2D optic flow generation.

Demo Sessions:
  • A Shutter-Less Micro-Bolometer Thermal Imaging System Using Multiple Digital Correlated Double Sampling for Mobile Applications,
    S. Park, T. Cho, M. Kim, H. Park, and K. Lee, KAIST and Seoul National Univ. of Science and Technology, Korea
  • A 4.1Mpix 280fps Stacked CMOS Image Sensor with Array-Parallel ADC Architecture for Region Control,
    T. Takahashi, Y. Kaji, Y. Tsukuda, S. Futami, K. Hanzawa, T. Yamauchi, P. W. Wong, F. Brady, P. Holden, T. Ayers, K. Mizuta, S. Ohki, K. Tatani, T. Nagano, H. Wakabayashi, and Y. Nitta, Sony, Japan and Sony, USA
  • 320x240 Back-Illuminated 10µm CAPD Pixels for High Speed Modulation Time-of-Flight CMOS Image Sensor,
    Y. Kato, T. Sano, Y. Moriyama, S. Maeda, T. Yamazaki, A. Nose, K. Shina, Y. Yasu, W. van der Tempel, A. Ercan and Y. Ebiko, Sony, Japan and SoftKinetic, Belgium

Himax Keeps Investing in 3D Sensing

Himax announces its Q1 2017 results. The company updates on its image sensing business:

"With respect to the non-driver business, particularly in the WLO and CMOS Image Sensor products of 3D scanning solutions, we believe it is one of the most significant new applications for the next generation smartphone. Himax is well recognized to be the front runner and world leader in this important technology. Our SLiMTM product line is the state of the art total solutions for 3D sensing and scanning based on structured light technology, of which we can also provide individual technologies separately to selected customers to accommodate their specific needs. We are seeing strong demand for 3D scanning products from multiple top name customers who are either collaborating with us or engaging us for advanced stage discussions. In light of the promising new business opportunities around the corner, we will continue to invest heavily in R&D and customer engineering regardless of the prevailing unfavorable business conditions. We are aware that this will hit our short-term bottom line, but we believe such investment is extremely important and will bring in very handsome return in the next few years.

Sales of CMOS image sensors will deliver double-digit growth in the second quarter...

On the CMOS image sensor business update, the Company continues to make great progress with its two machine vision sensor product lines, namely near infrared (“NIR”) sensor and Always-on-Sensor (“AoS”). The Company’s NIR sensor is a critical part in the structured light 3D scanning total solution. Similar to WLO, the Company can supply NIR sensor as an individual component for both mobile and non-mobile applications. The Company’s NIR sensors’ overall performance is far ahead of those of its peers. Himax currently can offer low noise HD and 5.5 megapixel NIR sensors with superior quantum efficiency in NIR band while operating at excellent power consumption.

Himax’s AoS solutions provide super low power computer vision to enable new applications across a very wide variety of industries. The ultra-low power, always-on vision sensor is a powerful solution capable of detecting, tracking and recognizing its environment in an extremely efficient manner using just a few milliwatts of power. In April, Himax announced a strategic investment in Emza, an Israeli software company dedicated to developing extremely efficient machine vision algorithms. The investment enables the Company to provide turn-key solutions to meet customers’ increasing appetite for ultra-low power. With Emza’s machine-vision algorithms, the Company can transform AoS sensor from a pure image capturing component to an information analytics device that can be easily integrated into smart home and security applications as well as smartphone, AR/VR, AI and IoT devices.

For the traditional human vision segments, the Company expects mass production of several earlier design wins for notebooks and increased shipments for multimedia applications such as car recorder, surveillance, drone, home appliances, and consumer electronics, among others, during the second quarter.

Embedded Vision Summit Presentations

Embedded Vision Alliance opens access to the presentations of the Embedded Vision Summit held on May 1-3, 2017 in Santa Clara, CA (free registration and 207MB of hard disk space for the presentations are required). A few interesting bits from various presentations:

Intel previews its 300m long range RealSense depth camera:

By now, Intel's depth camera lineup is probably the broadest in the industry:

Fotonation shows that camera and image processing are by far the most power consuming parts of a smartphone:

Fotonation proposes its HW accelerators to reduce power:

Microsoft keynote talks about the Hololens sensor architecture:

Sony plans to leverage the speed and lower power of its future image sensors:

Thursday, May 11, 2017

Invention and Innovation in Image Sensors

Dartmouth publishes Eric Fossum's talk on invention and innovation in image sensors, starting from the early days of imaging and going all the way to the Quanta Image Sensor and the Gigajot startup:

ON Semi Announces RGB-IR Versions of its 2MP Sensors

BusinessWire: ON Semi announces RGB-IR versions of its 1/2.7-inch 1080p60 AR0237 and AR0238 CMOS sensors for home security and automated monitoring applications. They can capture daytime color and nighttime NIR image data on the same sensor, without the cost and complexities (such as refocusing, maintenance, etc.) that can result from having a mechanical IR-cut filter mounted on the imaging assembly. Their RGB-IR CFA uses a 4x4 kernel that replaces some R and B pixels with NIR-sensitive pixels and rearranges the spatial density of the remaining pixels.
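For illustration, such a 4x4 RGB-IR mosaic can be modeled as a tiled kernel of channel labels. The layout below is a hypothetical example for the sake of the sketch, not the actual AR0237/AR0238 pattern:

```python
import numpy as np

# Hypothetical 4x4 RGB-IR kernel; 'I' marks NIR-sensitive pixels
# substituted for some R and B sites of a plain Bayer-like mosaic.
KERNEL = np.array([
    ["B", "G", "I", "G"],
    ["G", "R", "G", "R"],
    ["I", "G", "B", "G"],
    ["G", "R", "G", "R"],
])

def channel_mask(shape, channel):
    """Boolean mask of the sensor sites carrying the given channel,
    produced by tiling the 4x4 kernel over the full sensor array."""
    h, w = shape
    tiled = np.tile(KERNEL, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return tiled == channel

# In this illustrative layout, NIR sites occupy 2 of every 16 pixels:
mask_ir = channel_mask((1080, 1920), "I")
print(mask_ir.mean())  # 0.125
```

A daytime pipeline would demosaic the R/G/B sites (ignoring or subtracting the NIR channel), while a nighttime pipeline would read the same sensor through the NIR mask, which is what removes the need for a mechanical IR-cut filter.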

Omnivision on Pixel Design and Optimization

Electronic Imaging 2017 publishes the presentation "CMOS Image Sensor Pixel Design and Optimization" by OmniVision CTO Boyd Fowler. A few slides from the presentation:

Meanwhile, Electronic Imaging 2018 calls for image sensor papers for next year's conference, to be held on January 28 - February 1, 2018 in Burlingame, California.

Thanks to AD for the links!

Wednesday, May 10, 2017

EyeLock Single Camera For Acquiring Iris Biometrics And Face Images

PRNewswire: EyeLock LLC announces that the USPTO has issued Patent 9,646,217 on a single camera to acquire iris biometrics as well as a face image by providing suitable illumination adjustments between the two acquisitions:
  • It uses a single image sensor to acquire a face image and an iris image suitable for iris recognition;
  • determines the distance of the subject from the image sensor, so that suitable illumination levels can be provided even when the two types of images are being captured in quick succession;
  • links the face image to the iris image, which allows the face image to be used for liveness confirmation of the iris biometrics; and
  • allows the acquisition of the face image to serve as biometric deterrence and the face image to be optionally stored for future dispute resolution.
The EyeLock reference designs are said to have working distances of up to 60 cm with a false accept rate of 1 in 1.5 million for single eye authentication and a false reject rate of less than 1%.

From the patent: "Since reflectance of a face is different from that of an iris, acquiring an image of an iris and a face from the same person with a single sensor according to prior methods and systems has yielded poor results. Past practice required two cameras or sensors or, in the case of one sensor, the sensor and illuminators were operated at constant settings.

We have discovered a method, and a related system for carrying out the method, which captures a high-quality image of the iris and the face of a person with a single sensor or camera having a sensor, by acquiring at least two images with a small time elapse between each acquisition and changing the sensor or camera settings and/or illumination settings between the iris acquisition(s) and the face acquisition(s)."
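The claimed sequence amounts to two back-to-back exposures with a settings change in between. A minimal sketch of that flow, where the setting names and values are my assumptions and not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class CaptureSettings:
    exposure_ms: float    # sensor exposure time
    analog_gain: float    # sensor gain
    nir_led_power: float  # illuminator drive, 0.0 .. 1.0

# Hypothetical settings: iris capture favors strong NIR illumination
# and short exposure; face capture relaxes both for a natural image.
IRIS_SETTINGS = CaptureSettings(exposure_ms=4.0, analog_gain=2.0, nir_led_power=0.9)
FACE_SETTINGS = CaptureSettings(exposure_ms=8.0, analog_gain=1.0, nir_led_power=0.2)

def acquire_pair(capture):
    """Capture an iris frame and a face frame in quick succession,
    reconfiguring the sensor/illuminator between the two exposures.
    `capture` is a callback taking CaptureSettings and returning a frame."""
    iris_frame = capture(IRIS_SETTINGS)
    face_frame = capture(FACE_SETTINGS)
    # Returning the two frames together keeps them linked, which is what
    # allows the face image to serve as a liveness check on the iris data.
    return {"iris": iris_frame, "face": face_frame}

# Stub camera for demonstration: simply echoes the settings it was given.
frames = acquire_pair(lambda settings: settings)
print(frames["iris"].nir_led_power > frames["face"].nir_led_power)  # True
```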

Sony on Challenges of Image Sensor Development in 2010

ISSCC publishes a video of the 2010 plenary talk by Tomoyuki Suzuki, Sony SVP of the image sensor BU:

Tuesday, May 09, 2017

Qualcomm Expands High End ISP Features to its Mid-Range Application Processors

Qualcomm announces Snapdragon 660 and 630 mobile platforms integrating its Spectra ISP. The new mid-range processors have quite advanced camera features:
  • The 14-bit Qualcomm Spectra 160 ISP supports up to 24MP single ISP images with zero shutter lag
  • Smooth zoom, fast autofocus
  • 4K video capture and playback at 30fps
  • The Snapdragon 630 supports up to 13MP + 13MP dual cameras, while the 660 bumps that up to 16MP + 16MP.
  • The 660 is also equipped with the Qualcomm Hexagon 680 DSP, which incorporates HVX. HVX is designed to support higher camera performance by taking on tasks traditionally handled by the ISP through advanced computer vision and powerful yet efficient image processing.

TowerJazz Reports CIS Business Growth

SeekingAlpha: The TowerJazz Q1 2017 report says that the CIS business is one of the main drivers of the company's growth:

"With regard to the CIS business unit, in the first quarter we continued to grow our business mainly in the industrial sensor and medical X-ray sensor markets. We saw, and continue to see, growth in customer demand. This is a combination of our customers' market-share growth and growth of the total market.

In the industrial market, we already announced the release of the state-of-the-art 2.8 micron global shutter pixel. This pixel is being used by our lead customers, each of which announced the release of their new family of high-resolution global shutter sensors. The technology is available in our RI 110 nanometer fab in Japan and will become available in Mid Atlantic [ph] as well in 2018.

This allows us to support large-format, high-resolution, state-of-the-art global shutter sensors. In parallel, we developed a near-IR version of this technology, mainly to support sensitivity in low-light conditions as well as the 3D and gesture recognition market. The main growth of our global shutter technology is in our 8-inch fabs, namely RI in Japan and Fab 2 in Israel. However, we are now transferring this technology to our Uozu 12-inch 65 nanometer line to support other high-volume applications.

We continue to grow our high-end photography market share, both in still sensors for DSLR and mirrorless cameras and in video, namely in the cinematography and broadcasting video segments. The main growth activity in these areas is in our 12-inch line. Automotive market growth is a major focus for the CIS business, namely in LiDAR, the light-based radar that is a key component for autonomous driving vehicles.

This requires highly sensitive, accurate and low-cost LiDAR, a requirement that can be met with our avalanche photodiode technology. We expect to see steady growth in the CIS business unit throughout 2017, driven by the industrial, medical, security and high-end photography market segments, while getting more design wins in the automotive and augmented/virtual reality segments, both of which are contingent on our 3D time-of-flight or structured-light-based technology. We believe that these sectors will continue to drive our growth over multiple years."

The Q&A session talks about the foundry business in 3D imaging:

Q: "Related to the image sensor business, you talked about time-of-flight and structured-light sensors for 3D sensing. Can you give your sense of when that market starts to take off for you, and any comments you can make about the breadth of your customer base?"

Russell Ellwanger, CEO: "So we've press released on that in the past. We have several customers that we deal with on 3D and gesture control. The market, I think, is taking off. It's not the biggest portion of our revenue from CIS, but it is a market that's growing. We have several new customers that are not in volume production at this point but promise to be extremely powerful growth drivers for our market, probably within a one-to-two-year time period; but again, they're not press released."

Monday, May 08, 2017

ON Semi Announces Q1 2017 Results

ON Semi announces its Q1 2017 results. Here is the image sensor business update:

"For the first quarter, image sensor revenue related to ADAS and viewing applications grew at an impressive high-teen percentage rate quarter over quarter. Our design win pipeline continues to grow for ADAS and active safety applications, and we expect robust growth in our ADAS related revenue to continue... we are making investments in ADAS to further accelerate our growth in this market.

We are well positioned to benefit from growth in machine vision and industrial automation applications with our CMOS and CCD image sensors. Our Python line of CMOS image sensors for machine vision applications continues to grow at an impressive rate. We have been repurposing products from our consumer end-market for industrial applications, which drives higher margins and long-term sustainable revenues."

SeekingAlpha publishes ON Semi earnings call transcript with a question about Aptina business:

Q: "Perhaps, you guys can talk about new Aptina capacity, both front-end and back-end. I think, SMIC was putting some more online from you – for you guys. Is that up and running now? And if not, where are we with capacity? What kind of utilization do we have for Aptina specifically?"

Keith D. Jackson, CEO: "We're in a very good situation with capacity for image sensing business, running less than 70% full globally on an overall basis."

Sunday, May 07, 2017

Tesla Recreated Mobileye Tech in 6 Months

SeekingAlpha: Elon Musk, Tesla CEO, says in the company's Q1 2017 earnings call:

"We had a bit of a dip, obviously because of the unexpectedly rapid transition away from MobilEye, where we would expect it to have the MobilEye chip on the board as we transition, but MobilEye refused to allow that, so then we had to basically recreate all the MobilEye functionality in about six months, which we did."

Saturday, May 06, 2017

Weekly Patent Review

There have been quite a few interesting patent applications published this week, a rare statistical fluctuation.

ST's LED flicker-reducing HDR sensor patent application US20170118424 "High dynamic range image sensor" by Tarek Lule, Jérôme Chossat, and Benoît Deschamps describes different-sized photodiodes, possibly for the next version of the company's automotive HDR sensor:
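The split-photodiode idea behind such sensors can be sketched as a simple merge of two readings per pixel. The values and the linear merge below are my illustrative assumptions, not circuitry from the ST application: a large, sensitive photodiode covers the shadows, while a small, less sensitive one keeps integrating through highlights (and LED pulses) without saturating.

```python
FULL_WELL = 4095          # assumed 12-bit readout ceiling
SENSITIVITY_RATIO = 16.0  # assumed large-PD / small-PD sensitivity ratio

def merge_split_pixel(large_pd, small_pd, sat_threshold=3800):
    """Return one linear HDR value from the two photodiode readings.
    Below saturation the large photodiode is used directly; once it
    clips, the small-photodiode reading is rescaled by the sensitivity
    ratio to extend the dynamic range."""
    if large_pd < sat_threshold:
        return float(large_pd)
    return small_pd * SENSITIVITY_RATIO

# Dim scene: the large photodiode dominates.
print(merge_split_pixel(1000, 62))    # 1000.0
# Bright scene: the small photodiode extends the range by the ratio.
print(merge_split_pixel(4095, 1000))  # 16000.0
```

Because the small photodiode can use a long exposure without clipping, it can span an LED's full on/off period, which is the same reasoning OmniVision gives for its split-pixel flicker mitigation above.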

Samsung patent application US20170118450 "Low-light image quality enhancement method for image processing device and method of operating image processing system performing the method" by Yong Ju Jung, Kwang Hyuk Bae, Chae Sung Kim, and Joon Seo Yim resolves the tradeoffs between RGB and RWB color patterns - use both and combine their outputs:

Pixart patent application US20170118418 "Dual-aperture ranging system" by Guo-zhen Wang and Yi-lei Chen seems to follow in the footsteps of the Dual Aperture startup's ideas, just 3 years later: