
What Is a Vehicle Camera?

Jan 04, 2023

01. Camera Structure


A camera generally consists of a lens, an image sensor, an image signal processor (ISP), and a serializer, as shown in the figure below. The data path is as follows: the lens collects the basic optical information of the object, the image sensor performs initial processing and hands the raw data to the ISP, and the processed data is then transmitted serially. The transmission can be LVDS-based over coaxial or twisted-pair cable, or carried directly over Ethernet.

[Figure: Camera structure]
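As a rough illustration of this data path, the sketch below models the stages as simple Python functions. The function names, array shapes, and processing steps are illustrative assumptions, not a real camera driver API.

```python
import numpy as np

# Toy model of the camera data path: lens -> image sensor -> ISP -> serializer.
# Everything here is a simplified placeholder for illustration only.

def lens(scene: np.ndarray) -> np.ndarray:
    """The lens focuses light from the scene onto the sensor plane (pass-through here)."""
    return scene

def image_sensor(optical: np.ndarray) -> np.ndarray:
    """The image sensor converts light into raw digital values; here we simply quantize to 8 bits."""
    return np.clip(optical * 255.0, 0, 255).astype(np.uint8)

def isp(raw: np.ndarray) -> np.ndarray:
    """A real ISP would perform demosaicing, denoising, white balance, etc. (placeholder)."""
    return raw

def serializer(frame: np.ndarray) -> bytes:
    """The serializer turns the frame into a serial bitstream for LVDS (coax/twisted pair) or Ethernet."""
    return frame.tobytes()

scene = np.random.rand(720, 1280)  # toy monochrome "scene"
bitstream = serializer(isp(image_sensor(lens(scene))))
print(f"serialized frame size: {len(bitstream)} bytes")  # 720 * 1280 * 1 byte
```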

02. Impact of Viewing Angle


For in-vehicle cameras, the current state of color, resolution, and frame rate technology can basically meet the needs of autonomous driving software. For example, AP2.5 uses 1080p (about 2 Mpixel) cameras at 30 fps; by comparison, the Xiaomi 10 phone camera offers 108 Mpixel at 60 fps. What matters most for camera arrangement is the effect of the viewing angle on the perceived range. With the image sensor size fixed, a longer focal length gives a narrower viewing angle but a much higher angular resolution - in other words, you can see clearly, but you see less.


[Figure: Viewing angle of a vehicle camera]
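The trade-off between focal length and viewing angle can be made concrete with the standard pinhole relation FOV = 2·arctan(sensor_width / (2·focal_length)). A minimal sketch is shown below; the sensor width and focal lengths are illustrative values, not the specs of any particular automotive camera.

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal field of view of a pinhole camera, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

sensor_width = 5.76  # mm, illustrative small automotive-class sensor
for f in (2.0, 6.0, 12.0):  # shorter to longer focal lengths
    print(f"f = {f:4.1f} mm  ->  FOV = {horizontal_fov_deg(sensor_width, f):5.1f} deg")

# A longer focal length yields a narrower FOV but more pixels per degree:
# "see clearly, but see less".
```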

For autonomous driving, the system not only needs to pay attention to distant road signs, traffic lights, signs, vehicles, and other information for route planning and pre-emptive control decisions; it also needs to observe whether there are pedestrians nearby, whether vehicles are cutting in at intersections, and to collect information about adjacent vehicles, pedestrians, bicycles, and so on for risk pre-control.


A single camera currently cannot cover the full field of view on its own. Therefore, L2-level and higher systems are basically equipped with both medium-range and long-range cameras, and higher-end vehicles use a three-camera front-view configuration.


03. The Main Roles of Vision Sensors in the Autonomous Driving System


Obstacle detection: speed and distance measurement using binocular or trinocular cameras

Lane line detection: lane line extraction

Road information reading: traffic signal recognition, traffic sign recognition

Map construction and auxiliary positioning

Detection and identification of other traffic participants: vehicle detection, pedestrian detection, animal detection


04. Comparison of Monocular and Binocular Camera Ranging Principles


The principle of monocular camera distance measurement is to match and recognize first, then estimate distance: the target category is recognized by image matching, the target's real-world size is assumed, and the distance is then estimated from its size in the image. The advantages of a monocular system are low cost, low demand on computing resources, and a relatively simple system structure. The disadvantages are: (1) a large sample database must be continuously updated and maintained to ensure a high recognition rate; (2) non-standard obstacles cannot be judged; (3) the distance is not truly measured, so accuracy is low - the best mass-produced systems in the industry currently achieve an error of about 10-15%.
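A minimal sketch of the "recognize first, estimate distance later" idea follows, using the pinhole relation distance ≈ focal_length_px · real_height / pixel_height. The target classes, assumed real-world heights, and focal length are illustrative assumptions - which is exactly why this method degrades on non-standard obstacles.

```python
# Monocular ranging sketch: distance follows from an *assumed* real-world size.
ASSUMED_HEIGHT_M = {      # typical heights from a sample database (illustrative values)
    "car": 1.5,
    "pedestrian": 1.7,
    "truck": 3.0,
}

def monocular_distance(label: str, bbox_height_px: float, focal_length_px: float) -> float:
    """Estimate distance from the detected class and its bounding-box height in pixels."""
    real_height = ASSUMED_HEIGHT_M[label]  # wrong class or unusual size -> wrong distance
    return focal_length_px * real_height / bbox_height_px

# Example: a detected car whose bounding box is 60 px tall, camera focal length ~1400 px
print(f"estimated distance: {monocular_distance('car', 60.0, 1400.0):.1f} m")  # ~35 m
```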


Binocular camera distance measurement is based on the principle of binocular triangulation: the perceived distance to the target object is an absolute measurement rather than an estimate. The principle is shown in the figure below.


[Figure: Binocular camera distance measurement principle]
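A minimal sketch of the triangulation relation depth = focal_length · baseline / disparity is given below; this is why stereo depth is a direct measurement rather than a size-based estimate. The baseline, focal length, and disparity values are illustrative.

```python
def stereo_depth_m(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from binocular triangulation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point must be seen by both cameras)")
    return focal_length_px * baseline_m / disparity_px

# Example: 1400 px focal length, 12 cm baseline, 8.4 px disparity between left and right images
print(f"depth: {stereo_depth_m(1400.0, 0.12, 8.4):.1f} m")  # 1400 * 0.12 / 8.4 = 20.0 m
```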


Advantages of the binocular system:


1) The cost is higher than a monocular system, but lower than LiDAR and other solutions;

2) The principle does not require recognition before measurement; all obstacles are measured directly;

3) No need to maintain a sample database.


Disadvantages of the binocular system:

1) High computational complexity, making large-scale commercialization difficult;

2) Very sensitive to ambient lighting, which poses a great challenge to the algorithm;

3) Not suitable for monotonous, texture-poor scenes; objects close to the background color (sky, white walls, desert, etc.) may not be recognized;

4) The camera baseline limits the measurement range, while installation accuracy and optical-center deviation strongly affect the measurement results, and long-term consistency is difficult to guarantee (see the sketch after this list).
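The baseline limitation in point 4 follows from differentiating Z = f·B/d: the depth error grows roughly as δZ ≈ Z²·δd / (f·B), so for a fixed disparity-matching error, accuracy falls off quickly at range when the baseline is short. The numbers below are illustrative assumptions.

```python
def depth_error_m(depth_m: float, focal_length_px: float, baseline_m: float,
                  disparity_error_px: float = 0.25) -> float:
    """Approximate stereo depth error: dZ ~ Z^2 * dd / (f * B)."""
    return depth_m ** 2 * disparity_error_px / (focal_length_px * baseline_m)

for z in (10.0, 30.0, 60.0):  # target distance in meters
    err = depth_error_m(z, focal_length_px=1400.0, baseline_m=0.12)
    print(f"Z = {z:4.0f} m  ->  depth error ~ {err:4.1f} m ({100 * err / z:.0f}%)")
```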


05. Camera Data Transmission


There are two types of camera data processing in mass-produced models. The first is to integrate processing at the camera, so that features are extracted and output directly; a typical application is Mobileye's EyeQ series. For example, EyeQ4 is designed to process 8 camera channels at the same time. The raw data is processed inside EyeQ4, which can extract lane line data, information on up to 10 pedestrian and vehicle targets, traffic signs, the vehicle's own body attitude, and range information (using a monocular algorithm).


The extracted data is finally sent to the central processor via CAN-FD. The second type, in which the camera outputs raw data, is typified by Tesla HW2.5. HW2.5 uses a computing unit with dual Nvidia Parker SoCs plus a GP106 graphics card, which is capable of processing 12 camera channels.


There are two general options for data transfer between the camera and the vision processor: a serial interface or Ethernet. The serial interface is currently the more common choice. The physical layer for serial transmission of camera data uses LVDS (Low-Voltage Differential Signaling), which offers high speed (Gbps level), low latency, and low power consumption. The main protocol-layer solutions are TI's FPD-Link, Maxim's GMSL, and others.
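To see why Gbps-class links are needed per camera, the sketch below computes the raw bandwidth of an uncompressed video stream. The resolution, bit depth, and frame rate are illustrative values (roughly the 1080p/30 fps class mentioned earlier), not a statement about any specific platform.

```python
def raw_bandwidth_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
    """Uncompressed video bandwidth in Gbit/s (payload only, no protocol overhead)."""
    return width * height * bits_per_pixel * fps / 1e9

# 1080p raw video at 12 bits/pixel and 30 fps -- a single camera channel
bw = raw_bandwidth_gbps(1920, 1080, 12, 30)
print(f"one camera:    {bw:.2f} Gbit/s")       # ~0.75 Gbit/s
print(f"eight cameras: {8 * bw:.2f} Gbit/s")   # why each channel gets its own SerDes link
```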


The transmission protocols currently on the market are proprietary and not interoperable across vendors, so the choice of protocol layer is generally determined by the controller platform. For example, HW2.5 uses the Nvidia solution and therefore the GMSL protocol, while HW3.0 (FSD) chooses the TI solution.


The principle of LVDS transmission: through precise impedance matching of the wiring harness, a small current drives a differential voltage, enabling ultra-high transmission rates (>1.8 Gbps at 15 m, 3 Gbps at 10 m). Using coaxial cable effectively avoids vehicle-grade EMC risks while reducing cost. Data transmission uses point-to-point bi-directional communication: each camera must be equipped with an independent serializer chip, and the controller side must be equipped with an independent deserializer chip at the receiving end of each camera channel.

