How far is driverless technology from us?
The self-driving technology of science-fiction movies fascinates everyone. In recent years, advances in artificial intelligence have turned driverless cars from fantasy into reality, and car makers and Internet companies alike have poured into this new field.

Why might the sensors fail to detect a pedestrian? What technical solutions make a driverless car work? Let's find out.

Driverless technology

A driverless car is a smart car, also known as a wheeled mobile robot, that relies mainly on a computer-based intelligent pilot inside the vehicle to drive without a human. It senses the road environment through an on-board sensing system, automatically plans a driving route, and controls the vehicle to reach a predetermined destination.

It integrates many technologies, such as automatic control, systems architecture, artificial intelligence, and visual computing. It is a product of highly developed computer science and an important indicator of a country's research strength and industrial level.

By equipping vehicles with intelligent software and a variety of sensing devices, including on-board sensors, radars, and cameras, a vehicle can drive itself safely and efficiently to its destination, with the long-term goal of eliminating traffic accidents. The U.S. National Highway Traffic Safety Administration (NHTSA) defines the following levels of vehicle automation:

Level 0: the driver performs all driving;

Level 1: one or more individual automatic control functions (such as adaptive cruise control or a lane keeping system);

Level 2: the car performs several operational functions together;

Level 3: the car drives itself but instructs the driver to switch to manual driving when automated driving is not feasible;

Level 4: fully driverless operation.
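The level definitions above can be captured in a small sketch; the enum names and the helper function are illustrative choices for this article, not part of any standard API:

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The NHTSA automation levels summarized above."""
    DRIVER_ONLY = 0           # human performs all driving tasks
    FUNCTION_SPECIFIC = 1     # one or more isolated assist functions
    COMBINED_FUNCTION = 2     # car performs several operations together
    LIMITED_SELF_DRIVING = 3  # car drives, hands back control when needed
    FULL_SELF_DRIVING = 4     # no human intervention required

def requires_driver_fallback(level: AutomationLevel) -> bool:
    """A human fallback driver is needed at every level below 4."""
    return level < AutomationLevel.FULL_SELF_DRIVING
```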

Driverless technology mainly consists of the following technologies:

1. Lane Keeping System

When driving on the road, the system detects the lane lines to the left and right. If the vehicle drifts out of its lane, the lane keeping system first alerts the driver through vibration, then automatically corrects the steering to keep the vehicle centred in the lane.
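As a rough illustration of the correction step, here is a minimal sketch; the dead band, gain, and steering limit are invented values for the example, not figures from any production system:

```python
def lane_keep_correction(lateral_offset_m: float,
                         dead_band_m: float = 0.2,
                         gain: float = 0.5,
                         max_steer_rad: float = 0.1) -> float:
    """Return a steering correction (radians) that nudges the car back
    toward the lane centre.  Positive offset = drifted right, so we
    steer left (negative angle).  Inside the dead band we do nothing,
    mirroring a system that only intervenes once a drift is detected."""
    if abs(lateral_offset_m) <= dead_band_m:
        return 0.0
    correction = -gain * lateral_offset_m
    # clamp to the actuator limit
    return max(-max_steer_rad, min(max_steer_rad, correction))
```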

2. ACC adaptive cruise system or laser ranging system

Adaptive Cruise Control (ACC) is an automotive feature that allows the vehicle's cruise control system to adapt to traffic conditions by adjusting its speed.

A radar mounted at the front of the vehicle detects whether a slower vehicle is present in the lane ahead. If so, the ACC system reduces speed to maintain a set distance or time gap to the vehicle in front. If the system detects that the vehicle ahead has left the lane, it accelerates back to the previously set speed. This allows the car to decelerate and accelerate autonomously, without driver intervention. ACC controls vehicle speed mainly through the engine throttle and moderate braking.
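The decision logic described above can be sketched as follows; the constant time-gap policy and the 1.8-second default are illustrative assumptions, not values from any specific ACC system:

```python
from typing import Optional

def acc_target_speed(set_speed: float,
                     own_speed: float,
                     lead_speed: Optional[float],
                     gap_m: Optional[float],
                     time_gap_s: float = 1.8) -> float:
    """Pick the speed the ACC controller should track.
    With no lead vehicle (or one that has left our lane) we return to
    the driver-set speed; otherwise we slow down whenever the current
    gap is shorter than the desired time gap behind the lead vehicle."""
    if lead_speed is None or gap_m is None:
        return set_speed                   # free road: cruise at set speed
    desired_gap = time_gap_s * own_speed   # constant time-gap policy
    if gap_m < desired_gap:
        return min(set_speed, lead_speed)  # match the slower lead vehicle
    return set_speed
```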

3. Night vision system

A night vision system is an automotive driver-assistance system derived from military technology. With its help, the driver gains greater foresight when driving at night or in low light; the system provides more comprehensive and accurate information and can issue early warnings of potential dangers.

4. Precise positioning/navigation system

Self-driving cars rely on highly accurate maps to determine their location, because GPS alone is subject to deviations. Before an autonomous car takes to the road, engineers drive the route and record its conditions, so that the car can later compare real-time data with the recorded data; this also helps it distinguish pedestrians from roadside objects.
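A toy version of this comparison step might snap each noisy GPS fix to the nearest point of the previously recorded track; the tuple-based data format is an assumption made for the sketch:

```python
import math

def match_to_map(gps_fix, recorded_track):
    """Snap a noisy GPS fix (x, y in metres) to the closest point on a
    previously recorded track, a simple stand-in for the map matching
    described above."""
    return min(recorded_track,
               key=lambda p: math.hypot(p[0] - gps_fix[0],
                                        p[1] - gps_fix[1]))
```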

Sensor systems for driverless cars

Realizing driverless cars requires a great deal of scientific and technical support, the most important being a large array of well-placed sensors. The core technology spans high-precision maps, positioning, perception, intelligent decision-making, and control. Several key technical modules stand out: precise GPS positioning and navigation, a dynamic sensing obstacle-avoidance system, and machine vision. The sensor system is shown in the figure.

Precise GPS positioning and navigation

Unmanned vehicles place new demands on GPS positioning accuracy and interference resistance. The GPS navigation system must locate the vehicle continuously while it drives itself, and throughout this process the positioning error must not exceed one vehicle width.

Another challenge facing driverless cars is guaranteeing flawless navigation. The main technology here is GPS, which is already in wide everyday use. Because GPS measurements accumulate no error over time and are taken automatically, GPS is well suited to navigating and positioning driverless vehicles.

To greatly improve measurement accuracy, the system adopts position-differential GPS. Compared with conventional GPS, differential GPS computes the difference between observations: one station observing two targets, two stations observing one target, or one station making two measurements of the same target. The purpose is to eliminate common error sources, including ionospheric and tropospheric effects.

Position differencing is one of the simplest differential methods, and any GPS receiver can be modified to form such a differential system.

The GPS receiver installed at the base station can fix its three-dimensional position after observing four satellites and thereby compute the base station's coordinates. Because of orbital errors, clock errors, SA effects, atmospheric effects, multipath effects, and other error sources, the computed coordinates differ from the base station's known, surveyed coordinates. The base station sends this difference as a correction over a data link; the user station receives it and uses it to correct its own computed coordinates.

The corrected user coordinates are free of the errors common to the base station and the user station, such as satellite orbit error, SA effects, and atmospheric effects, which improves positioning accuracy. The prerequisite is that the base station and the user station observe the same set of satellites, so the position-difference method suits situations where the user is within about 100 km of the base station. Its principle is shown in the figure.
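The correction exchange described above can be sketched in a few lines; coordinates are treated as plain (x, y, z) tuples purely for illustration:

```python
def dgps_correction(base_known, base_computed):
    """Correction broadcast by the base station: the difference between
    its surveyed coordinates and the coordinates it just computed from
    the satellites.  Errors shared with nearby users cancel this way."""
    return tuple(k - c for k, c in zip(base_known, base_computed))

def apply_correction(user_computed, correction):
    """User station adds the broadcast correction to its own fix."""
    return tuple(u + d for u, d in zip(user_computed, correction))
```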

High-precision body positioning is a prerequisite for unmanned driving. With existing technology, differential GPS can provide the precise positioning an unmanned vehicle needs, and it basically meets those requirements.

Dynamic Sensing Obstacle Avoidance System

As a land-based wheeled robot, the driverless car has much in common with ordinary robots, along with important differences. First, as a car it must guarantee the comfort and safety of its occupants, which demands stricter control over driving direction and speed. Second, smooth driving places high demands on acquiring dynamic information about surrounding obstacles. Many autonomous vehicle research teams at home and abroad detect dynamic obstacles by analyzing laser sensor data.

Stanford University's autonomous vehicle "Junior" uses laser sensors to model the geometric features of tracked targets' motion, then uses a Bayesian filter to update the state of each target individually. Carnegie Mellon University's "BOSS" extracts obstacle features from laser sensor data and detects and tracks dynamic obstacles by correlating laser readings taken at different times.

In practice, the 3D laser sensor's heavy data-processing workload introduces a noticeable delay, which reduces the car's ability to respond to dynamic obstacles; moving obstacles in the area directly ahead pose a particular threat to safe driving. An ordinary four-line laser sensor processes data faster, but its detection range is small, generally between 100° and 120°. In addition, a single sensor offers low detection accuracy in complex outdoor environments.

To address these problems, a dynamic obstacle detection method using multiple laser sensors is proposed. A 3D laser sensor detects and tracks the obstacles around the driverless car, and a Kalman filter tracks and predicts each obstacle's motion state. For the fan-shaped area ahead of the car, where high accuracy is required, confidence-distance theory is used to fuse the data from the four-line laser sensor and determine each obstacle's motion, improving the detection accuracy of obstacle motion states. The resulting grid map not only distinguishes the dynamic and static obstacles around the car, but also compensates the positions of dynamic obstacles according to the fusion results, eliminating the positional deviation caused by sensor processing delay.
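The Kalman tracking step mentioned above, reduced to one dimension, might look like this; the constant-position model and the noise values are simplifications made for the sketch, not parameters from the cited systems:

```python
def kalman_step(x, p, z, q=0.01, r=1.0):
    """One predict/update cycle of a scalar Kalman filter tracking,
    say, an obstacle's position along one axis.
    x, p : previous state estimate and its variance
    z    : new laser measurement
    q, r : process and measurement noise variances."""
    # predict (constant-position motion model for brevity)
    p = p + q
    # update
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # blend prediction with measurement
    p = (1 - k) * p          # shrink uncertainty
    return x, p
```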

First, the Velodyne data is rasterized into an obstacle occupancy grid map. Clustering and tracking the grid maps from different times yields the obstacles' dynamic information; the dynamic obstacles are then deleted from the grid map and stored in a dynamic obstacle list, leaving a static obstacle grid map. Next, the dynamic obstacle information in that list is fused, synchronously, with the dynamic obstacle information for the area ahead of the car acquired by the Ibeo sensor, producing a new dynamic obstacle list. Finally, the dynamic obstacles in the new list are merged back into the static obstacle grid map, giving a grid map in which dynamic and static obstacles are distinctly marked. The obstacle detection module analyzes the data returned by the various lidars, rasterizes it, and projects it into a 512*512 grid map to detect obstacles in the environment.
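The rasterization step can be sketched as follows, assuming the hit points are already expressed in the vehicle frame; the 0.5 m cell size is an illustrative choice:

```python
def rasterize(points, cell_size=0.5, grid_dim=512):
    """Project sensor hit points (x, y in metres, vehicle at the grid
    centre) into a grid_dim x grid_dim occupancy grid, in the spirit of
    the 512*512 lidar grid map described above."""
    grid = [[0] * grid_dim for _ in range(grid_dim)]
    half = grid_dim // 2
    for x, y in points:
        col = int(x / cell_size) + half
        row = int(y / cell_size) + half
        if 0 <= row < grid_dim and 0 <= col < grid_dim:
            grid[row][col] = 1   # mark cell occupied; points outside are dropped
    return grid
```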

Finally, the multi-sensor information fusion and environment modeling module fuses the environmental information obtained by the different sensors, builds a road model, and represents it as a grid map. This environmental information includes sign information, road information, obstacle information, location information, and so on.

Processing the collected environmental information yields a grid map that dynamically marks obstacles, achieving the obstacle-avoidance effect. Compared with processing Velodyne data alone, combining Velodyne and Ibeo information to estimate the state of moving targets greatly improves the accuracy and stability of the detection results.

Machine Vision Mechanism

Machine vision, also known as environmental perception, is the most important and most complex part of a driverless car. The perception layer must configure sensors appropriately for different traffic environments, fuse the environmental information they return, and build a model of the complex road environment. It is divided into modules for traffic sign recognition, lane line detection and recognition, vehicle detection, road edge detection, obstacle detection, and multi-sensor information fusion and environmental modeling.

Sensors detect environmental information, but by themselves they merely arrange and store the measured physical quantities. At that point the computer does not know what those data mean in the real environment. Appropriate algorithms must therefore mine the data we care about from the raw measurements and assign them physical meaning, thereby perceiving the environment.

For example, when we look ahead while driving, we can pick out our current lane line from the surroundings. For a machine to obtain that lane line information, a camera must first capture an image of the environment; the image itself carries no mapping to the real world. An algorithm must then find the part of the image that corresponds to the real lane line and assign it that meaning.
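A toy version of that mapping step, assuming bright pixels in one grayscale image row correspond to lane paint (a real pipeline would follow this with fitting a lane model across many rows):

```python
def lane_pixel_columns(row, threshold=200):
    """Given one row of grayscale pixel values (0-255), return the
    column indices bright enough to be lane-marking paint.  This is
    the simplest possible 'assign physical meaning' step: raw numbers
    in, candidate lane-line positions out."""
    return [i for i, v in enumerate(row) if v >= threshold]
```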

Self-driving vehicles perceive the environment with many kinds of sensors; the most common are cameras, laser scanners, millimeter-wave radars, and ultrasonic radars.

Different sensors call for different perception algorithms, which follow from how each sensor senses the environment; each sensor also differs in its perception capability and in how much the environment affects it. For example, the camera excels at object recognition but provides little distance information, and camera-based recognition is strongly affected by weather and light. Laser scanners and millimeter-wave radars measure object distance accurately but are far weaker than cameras at recognizing objects. Even sensors of the same type behave differently across specifications. To exploit each sensor's strengths and cover its weaknesses, sensor information fusion is the future trend, and some component suppliers already do this; for example, the combined camera and millimeter-wave radar perception module developed by Delphi has been applied in mass-produced vehicles. The system design therefore combines multiple sensing modules to recognize the various objects in the environment.
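One simple way to fuse two such complementary range estimates is inverse-variance weighting, shown here as a sketch; the variance figures are illustrative, not measured sensor characteristics:

```python
def fuse_range(camera_range, camera_var, radar_range, radar_var):
    """Inverse-variance weighted fusion of two range estimates: the
    noisier source (larger variance) gets the smaller weight, so the
    fused value leans toward the precise radar measurement while still
    using the camera's estimate."""
    w_cam = 1.0 / camera_var
    w_rad = 1.0 / radar_var
    return (w_cam * camera_range + w_rad * radar_range) / (w_cam + w_rad)
```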