Thursday, December 29, 2016

The Autonomous Ford – PanamaOn

It has already been three years since the first Ford Fusion (Mondeo) Hybrid research vehicle took to the streets, and this latest version takes advantage of everything learned and improved since then. The new car uses Ford's current autonomous vehicle platform, but increases the processing capacity thanks to new computer hardware. The electronic controls are closer to production level, and adjustments to the sensors, including their placement, give the car a better view of its surroundings. The new LIDAR sensors have a sleeker design and a more targeted field of vision, which allows the car to use just two sensors instead of four while capturing the same amount of information.

As previously mentioned, there are two key elements to creating an autonomous car: the autonomous vehicle platform, which for us is much more than a modified version of a car, and the virtual “driver” system. The new car advances on both fronts, particularly in the development and testing of the virtual driver system, which represents a huge step up in data collection and processing capacity. What do we mean by a virtual “driver”? To be fully autonomous, a car must meet the SAE-defined Level 4 standard: the driver is never required to take control, and the car must be able to act as a human being would behind the wheel. Ford's virtual driver system is designed to do exactly that, and it is composed of the following (a rough code sketch of how the pieces fit together appears after the list):

Sensors – LIDAR, cameras and radar
Localization and path-planning algorithms
Computer vision and machine learning
High-definition 3D maps
High computing power and the data infrastructure to manage it all
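To make that division of labor concrete, here is a minimal sketch in Python of how those pieces might be composed. All the names and interfaces here are illustrative assumptions for this article, not Ford's actual software:

    from dataclasses import dataclass

    @dataclass
    class SensorFrame:
        """One synchronized snapshot from the sensor suite (illustrative)."""
        lidar_points: list   # 3D points from the LIDAR sensors
        camera_images: list  # frames from the roof-mounted and windshield cameras
        radar_tracks: list   # range/velocity returns from short- and long-range radar

    class VirtualDriver:
        """Hypothetical top-level loop: sense, localize, plan, act."""

        def __init__(self, hd_map, localizer, planner, controller):
            self.hd_map = hd_map          # high-definition 3D map
            self.localizer = localizer    # matches sensor data against the map
            self.planner = planner        # localization and path-selection algorithms
            self.controller = controller  # relays decisions to steering/brakes/throttle

        def step(self, frame: SensorFrame):
            pose = self.localizer.locate(frame, self.hd_map)    # where exactly am I?
            path = self.planner.plan(pose, frame, self.hd_map)  # where should I go next?
            self.controller.execute(path)                       # act on the decision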

Building a car that will not be controlled by a human driver is completely different from building a conventional car, and it raises a whole series of questions for Ford's autonomous car engineering team: how do you replicate, in a car that drives itself, everything a human driver does behind the wheel? Even a simple shopping trip requires a human driver to make many decisions along the way. Have you taken the right route? What happens if an accident blocks the road? Just as we trust ourselves and other drivers, it is necessary to develop a virtual driver system with the same level of confidence in its decision making, able to apply those decisions correctly on the fly. That is what we are doing at Ford when we focus on making our cars see, feel, think and act like a human, and in some cases better.
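One of those on-the-fly decisions, rerouting when an accident blocks the chosen road, boils down to shortest-path search over a road graph. A minimal, self-contained sketch in Python (the toy road network and function name are invented for illustration):

    import heapq

    def shortest_path(graph, start, goal):
        """Dijkstra's algorithm over {node: {neighbor: cost}}; returns a node list or None."""
        queue = [(0.0, start, [start])]
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, edge_cost in graph.get(node, {}).items():
                if neighbor not in visited:
                    heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
        return None  # no route left

    # Toy road network for a shopping trip; an accident then closes the direct road.
    roads = {"home": {"main_st": 1.0, "side_st": 2.5},
             "main_st": {"store": 1.0},
             "side_st": {"store": 2.0}}
    print(shortest_path(roads, "home", "store"))  # ['home', 'main_st', 'store']
    del roads["main_st"]["store"]                 # accident blocks main street
    print(shortest_path(roads, "home", "store"))  # ['home', 'side_st', 'store']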

Based on the most recent technologies, Ford's engineers are working on two methods of perception for an autonomous vehicle's virtual driver: mediated perception and direct perception. Mediated perception requires generating high-resolution 3D maps of the car's environment. These maps include everything the virtual driver knows about the route before the car even begins to move: the location of stop signs, pedestrian crossings, traffic lights and other static elements. Once underway, the virtual driver uses the LIDAR, radar and camera sensors to continuously scan everything around the vehicle and compare what it sees with the 3D map. This allows it to position the car accurately on the road and to identify and understand its surroundings. Mediated perception also means the system knows the rules of the road, so it is prepared to obey them.
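The heart of mediated perception, comparing live sensor readings with the prebuilt map, can be shrunk to a toy example: if the car knows the true map positions of a few landmarks and measures where they appear relative to itself, the difference reveals its own position. A simplified 2D Python sketch (a real system matches millions of LIDAR points, and must also solve for heading, which is assumed known here):

    def locate(map_landmarks, observations):
        """Estimate the vehicle's (x, y) by averaging per-landmark estimates.
        Both arguments map landmark names to (x, y); observations are in the
        vehicle's frame, heading already compensated (an assumption)."""
        estimates = [(mx - ox, my - oy)
                     for name, (mx, my) in map_landmarks.items()
                     for ox, oy in [observations[name]]]
        n = len(estimates)
        return (sum(x for x, _ in estimates) / n,
                sum(y for _, y in estimates) / n)

    # Stop sign, crosswalk and traffic light at known positions in the 3D map...
    landmarks = {"stop_sign": (50.0, 10.0), "crosswalk": (60.0, 12.0), "light": (55.0, 20.0)}
    # ...observed by the sensors at these offsets from the car (with a little noise):
    observed = {"stop_sign": (10.1, 4.9), "crosswalk": (20.0, 7.1), "light": (14.9, 15.0)}
    print(locate(landmarks, observed))  # ~(40.0, 5.0): the car's position on the map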

Direct perception complements mediated perception, using the sensors to establish the car's position on the road and to track moving elements such as pedestrians, cyclists and other cars. The sensors can also read signals, such as a police officer directing traffic. Naturally, direct perception demands even more sophisticated software and computing systems, in order to identify and classify the wide variety of possible moving elements, especially pedestrians, on the fly. This mixed approach, incorporating both mediated and direct perception, will allow our virtual driver to perform its tasks the way a human would and, potentially, better. So what does it take to transform a conventional Ford Fusion (Mondeo) Hybrid into a fully autonomous car? Handing the virtual driver responsibility for three tasks: sensing the environment, analyzing the data to make decisions, and controlling the vehicle.
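One way to picture the mixed approach is as the merging of two object lists into a single world model: static elements looked up in the map (mediated perception) and moving elements detected live (direct perception). A hypothetical Python sketch of that merge:

    from dataclasses import dataclass

    @dataclass
    class WorldObject:
        kind: str        # e.g. "stop_sign", "pedestrian", "cyclist"
        position: tuple  # (x, y) in map coordinates
        velocity: tuple  # (vx, vy); static map elements do not move
        source: str      # "map" (mediated) or "sensor" (direct)

    def build_world_model(map_objects, detections):
        """Combine static map knowledge with live detections into one list
        for the planner to reason over. Purely illustrative structure."""
        world = [WorldObject(kind, pos, (0.0, 0.0), "map")
                 for kind, pos in map_objects]
        world += [WorldObject(kind, pos, vel, "sensor")
                  for kind, pos, vel in detections]
        return world

    model = build_world_model(
        map_objects=[("stop_sign", (50.0, 10.0))],              # known before the trip
        detections=[("pedestrian", (48.0, 11.0), (0.0, 1.2))])  # spotted on the fly
    for obj in model:
        print(obj.kind, obj.source)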

Outwardly, the autonomous research car differs from a conventional Fusion (Mondeo) Hybrid in its sensors. They are like the eyes and ears of a human being. Two LIDAR sensors, each capable of generating millions of laser pulses, sprout from the car's front pillars and provide a 360-degree view. These new sensors have a range of approximately the length of two football fields in every direction around the car. The high-definition LIDAR can tell where an object is, its size, and what it appears to be. There are also three cameras mounted on the roof bars, plus a forward-facing camera under the windshield. These cameras identify objects and read traffic signs along the road. Short- and long-range radars, able to see through rain, fog or snow, extend this vision, helping to determine an object's movement relative to the vehicle. The data from all these sensors feed the “brain” of the autonomous vehicle, where the information is compared with the 3D map and processed by other computer vision elements.
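The radar's contribution, an object's movement relative to the vehicle, is what makes quantities such as time-to-contact computable. A deliberately simplified one-dimensional illustration in Python (real radar processing is far more involved, and the 180 m range below is just a round-number reading of "two football fields"):

    def time_to_contact(range_m, closing_speed_mps):
        """Seconds until a radar-tracked object reaches the car, assuming a
        constant closing speed. Returns None if the object is not closing."""
        if closing_speed_mps <= 0:
            return None
        return range_m / closing_speed_mps

    # LIDAR gives the object's range; radar gives its relative (closing) speed.
    print(time_to_contact(range_m=180.0, closing_speed_mps=15.0))  # 12.0 seconds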

The autonomous car's brain is in the trunk. There, the equivalent of several high-capacity computers generates a terabyte of data an hour, more than a normal person would generate using a smartphone for 45 years. But what really takes advantage of that computing platform is the software Ford has developed for its virtual driver. There are so many variables an autonomous vehicle has to process on the fly: What is around me? What are the other drivers doing? Where am I going? What is the best route? When merging into a lane, should I speed up or slow down? How will my decisions affect the other cars? The sophisticated algorithms our engineers write process millions of data points per second, helping the autonomous car react the way it has been programmed to.
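One of those questions, whether to speed up or slow down when joining a lane, can be framed as gap acceptance: compare the gaps ahead of and behind the merge point with the space the car needs, and adjust speed toward an acceptable gap. A deliberately naive Python sketch (real planners also weigh speeds, accelerations and predicted intent, and the threshold here is invented):

    def merge_decision(gap_ahead_m, gap_behind_m, min_gap_m=15.0):
        """Toy gap-acceptance rule for merging into an adjacent lane."""
        if gap_ahead_m >= min_gap_m and gap_behind_m >= min_gap_m:
            return "merge"
        if gap_ahead_m < gap_behind_m:
            return "slow_down"  # drop back toward the larger gap behind
        return "speed_up"       # pull forward toward the larger gap ahead

    print(merge_decision(gap_ahead_m=20.0, gap_behind_m=18.0))  # merge
    print(merge_decision(gap_ahead_m=8.0, gap_behind_m=25.0))   # slow_down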

Just as the brain controls the muscles of the hands and feet when driving, in the autonomous car decisions are transmitted through a network of electronic signals. This means intervening in the Ford Fusion (Mondeo) Hybrid's software, and even its hardware, so that the electronic impulses reach the steering, brakes, throttle and transmission. Ensuring that all these electronic and mechanical systems work as expected takes a network much like the human nervous system. Naturally, these additional capabilities require additional energy, and lots of it. A conventional internal-combustion car does not generate enough electrical energy to power an autonomous vehicle, so the energy must come from the Fusion (Mondeo) Hybrid's high-capacity batteries, and in some cases even that is not sufficient. For that reason, the next generation of autonomous test cars will carry an additional generator.
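Those electronic impulses amount to actuation commands on a vehicle network. A hypothetical Python sketch of the very last step, bounding a command to safe ranges before it reaches the steering, brakes and throttle (the value ranges and the throttle/brake interlock are invented common-sense rules, not a description of Ford's systems):

    from dataclasses import dataclass

    @dataclass
    class ActuationCommand:
        steering: float  # normalized: -1.0 (full left) to 1.0 (full right)
        throttle: float  # 0.0 to 1.0
        brake: float     # 0.0 to 1.0

    def clamp(value, low, high):
        return max(low, min(high, value))

    def sanitize(cmd: ActuationCommand) -> ActuationCommand:
        """Bound every field, and never command throttle and brake together."""
        cmd = ActuationCommand(clamp(cmd.steering, -1.0, 1.0),
                               clamp(cmd.throttle, 0.0, 1.0),
                               clamp(cmd.brake, 0.0, 1.0))
        if cmd.brake > 0.0:
            cmd.throttle = 0.0
        return cmd

    print(sanitize(ActuationCommand(steering=1.4, throttle=0.5, brake=0.2)))
    # ActuationCommand(steering=1.0, throttle=0.0, brake=0.2)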

This new development car takes Ford a little closer to its commitment to deliver, by 2021, an autonomous car for the shared transport of people or goods. For now, the car has a steering wheel and pedals, items the final car will not have. In 2017, Ford will also begin testing autonomous cars on European roads. In the near future there is much to do. A larger test fleet speeds up the testing already under way on the roads of Michigan, Arizona and California. We look forward to expanding the fleet, tripling the number of cars to reach 90 this new year. We will also begin to hear more and more about the user experience of a shared autonomous vehicle, for example what to do if a user forgets a personal item in the car or leaves a door open.

Our engineers do not rest in their mission to develop a robust, capable and reliable virtual driver. The next generation of autonomous cars is a clear step forward, bringing us closer to our vision of self-driving cars in which our customers will travel all over the world. The future is coming, and we can't wait.
