Lane Keeping Assistant with an underlying video-based real-time application for edge detection using contemporary technologies.

Student Carl Schwedes
Topic Lane Keeping Assistant with an underlying video-based real-time application for edge detection using contemporary technologies.
Time Period 13.01.2017 – 04.07.2017

Prof. Dr. rer. nat. Toralf Trautmann

Prof. Dr. Kai Bruns


    Diploma Thesis Download


  • Abstract

    This approach focuses on the detection of road lane markings under time-critical, predefined real-time conditions. Several computer vision techniques are used to separate the desired features from the input image, with the aim of providing a robust and fast analysis algorithm for lane keeping assistants.
    Inverse Perspective Mapping (IPM) is applied to achieve the 2D-to-3D (image-to-world) coordinate transformation and thus simplify the mapping from pixels to real distances. Lane-marking-like patterns in the input image are searched for with a two-dimensional Gaussian convolution kernel that is specifically adjusted for this purpose. The RANdom SAmple Consensus (RANSAC) algorithm, in a polynomially extended form, is implemented to guarantee an outlier-free set of points, which is then combined with third-degree polynomial regression and the least squares method.
    Lastly, a Kalman filter is used for position prediction, estimation and correction to keep the test vehicle safely inside the lane and, ultimately, to support work on fully autonomous, self-driving vehicles.

  • Hardware

    Raspberry Pi 3 Model B:
    CPU: 4×1.2GHz (OC 4×1.3GHz) 64-bit quad-core ARMv8
    GPU: 300MHz (3D core) / 400MHz (VideoCore IV subsystem)

    Operating System: Raspbian Jessie with Pixel (ARM-architecture)
    Kernel-Version: 4.4
    Real-time patch: PREEMPT_RT 4.4.9-rt17


    Camera Module ver.2.1/NOIR:
    Sensor: Sony IMX219 8-megapixel
    Video-Format: raw H.264
    Resolution: 1080p30, 720p60, …
    FPS: 1/15/30/60/90, up to 120
    Programmable: fully (V4L2 driver available)
    Module v2 Lens:
    FOV-H/V: 62.2°/48.8°
    Focal length: 3.04mm


    Intel NUC6i5SYH:
    CPU: Intel Core i5-6260U (dual core cpu) 2×1.9GHz (2×2.8GHz turbo-boost)
    GPU: Intel Iris Graphics 540

    Operating System: Ubuntu-Mint
    Kernel-Version: 4.4
    Real-time patch: PREEMPT_RT 4.4.53-rt66


    Basler Camera: acA1300-60gm
    Sensor: EV76C560 1.3-megapixel (CMOS)
    Video-Format: Mono 8/12, YUV 4:2:2 (YUYV 4:2:2)
    Resolution: 1280×1024
    FPS: 60fps
    Programmable: fully (pylon5 API)


    Edmund Optics Lens:
    Lens: CFFL F1.8 2/3″
    FOV: 41°
    Focal length: 12mm


  • Computer Vision

    • Inverse Perspective Mapping

    Inverse perspective mapping (IPM) is applied to remove the perspective effect of the camera from the input image and to obtain an undistorted top view in which lane markings appear as quasi-parallel lines, so that pixel positions on the image describe global real-world coordinates.
    A correspondence of four point pairs (four image points mapped to four world points) is used to achieve the desired top view, as illustrated in the middle one of the three pictures below.

    Further, the 3×3 homography matrix H can then be determined from a linear system of eight equations. The generated top view has the advantage of a direct mapping between physical real-world units and pixel units, without any further world-to-image coordinate transformations.

    However, the accuracy is still strictly limited by the chosen image resolution, i.e. by the physical distance that is mapped onto an individual imager element. In this application the accuracy is a compromise between quality (resolution) and processing time. The resolution is therefore chosen such that 6 cm of real-world distance are mapped onto a single pixel.
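    The estimation of H from four point correspondences can be sketched as follows. This is a minimal illustration, not the thesis implementation: with h33 fixed to 1, each correspondence contributes two rows to an 8×8 linear system, which is solved by Gaussian elimination.

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_4_points(src, dst):
    """Estimate the 3x3 homography H (h33 = 1) from four point pairs.
    Each pair (x, y) -> (u, v) yields two linear equations in h11..h32."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def apply_homography(H, p):
    """Map a point through H with the perspective division."""
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

    In practice a library routine (e.g. OpenCV's getPerspectiveTransform) does the same job; the sketch only shows where the "8 equations" come from.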


    • Gaussian 2D Filter

    A Gaussian filter is used to systematically extract specific information from images. The filter is constructed in its x-y separable form, which yields considerable savings in processing and allows edge detection and smoothing of the input image to be carried out simultaneously. The filter in the x-direction is the fourth derivative of a one-dimensional Gaussian function, which reacts very sensitively to intensity differences along vertically aligned structures (interpreted as edges) in the input image. The y-component is a standard one-dimensional Gaussian function, which blurs out horizontally aligned intensity differences.

    The pattern of the filter constructed this way can be seen in the image below. Its narrow but elongated shape does not just highlight arbitrary intensity differences or edges in the input image; it systematically looks for vertically aligned line- and stripe-like patterns.
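    The kernel construction can be sketched as the outer product of the two one-dimensional components; the closed form of the fourth Gaussian derivative, (x⁴/σ⁸ − 6x²/σ⁶ + 3/σ⁴)·exp(−x²/2σ²), follows by differentiating four times. The kernel sizes and sigmas below are illustrative values, not the ones used in the thesis.

```python
import math

def gaussian(x, sigma):
    """Plain 1-D Gaussian (unnormalised), used as the smoothing y-component."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def gaussian_4th_derivative(x, sigma):
    """Fourth derivative of the 1-D Gaussian: the edge-sensitive x-component."""
    s2 = sigma * sigma
    return ((x ** 4 / s2 ** 4) - (6 * x * x / s2 ** 3) + (3 / s2 ** 2)) * gaussian(x, sigma)

def separable_lane_kernel(half_w, half_h, sigma_x, sigma_y):
    """Build the 2-D kernel as an outer product: 4th-derivative Gaussian
    along x, plain Gaussian along y, giving a narrow, elongated pattern."""
    kx = [gaussian_4th_derivative(i, sigma_x) for i in range(-half_w, half_w + 1)]
    ky = [gaussian(j, sigma_y) for j in range(-half_h, half_h + 1)]
    return [[gy * gx for gx in kx] for gy in ky]
```

    Because the kernel is separable, convolving with kx and then ky costs O(w+h) per pixel instead of O(w·h), which is the processing saving mentioned above.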

    The result of the filtering process can be seen in the images below, where only lane-marking-like patterns have been clearly highlighted by the edge detection process.


    • Histogram of Oriented Gradients – HOG

    To get an idea of which specific curvature is currently the most dominant one, a HOG curvature descriptor is applied to the input image: the orientation of every Gaussian-filtered pixel is determined with the standard Sobel operator in the x- and y-direction. To build the histogram of oriented gradients, the determined angles from 5° to 175° (left image below) are accumulated into 17 equally sized bins of 10° each (right image below).
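    One plausible reading of this binning, sketched below: Sobel gradients per pixel, the orientation folded into [0°, 180°), and bins assumed to cover [5°, 15°), [15°, 25°), …, [165°, 175°). The exact bin boundaries are an assumption; the thesis only states the 5°–175° range and 17 bins.

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def hog_17_bins(img):
    """Accumulate gradient orientations of all interior pixels into
    17 bins of 10 degrees, assumed to span 5..175 degrees."""
    h, w = len(img), len(img[0])
    bins = [0] * 17
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            if gx == 0 and gy == 0:
                continue  # flat region, no defined orientation
            angle = math.degrees(math.atan2(gy, gx)) % 180.0
            if 5.0 <= angle < 175.0:
                bins[int((angle - 5.0) // 10.0)] += 1
    return bins
```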

    The found angular components are printed to visualise how different orientations appear in different scenarios. This illustration, shown in the image below, serves demonstration purposes only and has no further meaning or functionality for the actual histogram of oriented gradients.

    To provide a parameter for the global orientation, the number of accumulated angular components per bin is used to calculate a weighted curvature coefficient that estimates the dominant orientation of the current scenario. This weighted coefficient is then tracked with a one-dimensional Kalman filter to follow the flow of orientation and to suppress sudden changes caused by potential false detections.
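    A scalar Kalman filter for such a slowly varying coefficient reduces to a few lines. The random-walk process model and the noise values q and r below are illustrative assumptions, not the parameters used in the thesis.

```python
class Kalman1D:
    """Scalar Kalman filter: random-walk model for the weighted curvature
    coefficient, with process noise q and measurement noise r."""
    def __init__(self, x0=0.0, p0=1.0, q=1e-3, r=0.5):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # predict: the coefficient is assumed constant between frames,
        # so only the uncertainty grows by the process noise
        self.p += self.q
        # correct with the new measurement z
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

    A small r makes the filter trust measurements (fast but jumpy); a small q makes it trust the model (smooth but sluggish) — which is exactly the trade-off behind suppressing false detections.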



    • Sum of Gaussians – SOG – Search for Dense Regions

    A sum of Gaussians is used to approximate a probability density function from discrete values, in order to find the most dominant and dense regions of the previously filtered image. The search for dense regions is carried out by simply counting the detected edge pixels column by column. The resulting discrete distribution of column values is then used to determine the number and position of potential lanes that have previously been detected by the Gaussian filter. A sum of multiple Gaussian functions (left image) finally reveals the most dominant and dense regions of detected edge pixels in the current image (right image).
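    The column counting and the Gaussian smoothing can be sketched as follows; the bandwidth sigma is an illustrative choice, and peak picking is reduced to simple local maxima.

```python
import math

def column_histogram(binary_img):
    """Count detected edge pixels per image column."""
    return [sum(col) for col in zip(*binary_img)]

def sum_of_gaussians(hist, sigma=2.0):
    """Place one Gaussian per column, scaled by its pixel count, and sum
    them to approximate a density over column positions."""
    n = len(hist)
    density = [0.0] * n
    for c, weight in enumerate(hist):
        if weight == 0:
            continue
        for x in range(n):
            density[x] += weight * math.exp(-((x - c) ** 2) / (2 * sigma * sigma))
    return density

def dense_regions(density):
    """Local maxima of the density are the candidate lane-marking columns."""
    return [x for x in range(1, len(density) - 1)
            if density[x] > density[x - 1] and density[x] >= density[x + 1]]
```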


    • Clustering: Polynomial Extended RANSAC

    Clustering algorithms and parameter estimation procedures were examined in depth with regard to their usability for this approach. The narrow but elongated shape of lane markings makes them very difficult to handle with traditional clustering or parameter estimation techniques. A specifically designed method is therefore needed that can reliably separate such detected structures from the background, where lanes can appear in varying numbers, orientations and bent shapes, covering all the ways such markings can occur. The previously introduced descriptors for curvature (Histogram of Oriented Gradients – HOG) and density (Sum of Gaussians – SOG) are now used to support a faster and more systematic search for lane-marking-like structures in the filtered image. The principle of the clustering algorithm is illustrated in the image below.

    SOG and HOG are combined to provide initial supporting parameters to the so-called polynomial extended RANdom SAmple Consensus (polyExtRANSAC) algorithm, which finally clusters the filtered data. The algorithm is limited to detecting a maximum of four lanes: the nearby markings are what matters for secure lane keeping, while the neighbouring lanes provide enough additional information for a potential extension of the entire algorithm towards a Lane Changing Assistant (LCA). The clustered result can be seen in the image below.
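    Stripped of the HOG/SOG initialisation, the core RANSAC loop for a third-degree polynomial looks as sketched below: sample four points, fit the unique cubic through them, and keep the hypothesis with the largest inlier set. Iteration count, tolerance and sampling are illustrative simplifications of polyExtRANSAC, not the thesis implementation.

```python
import random

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def cubic_through(points):
    """Exact third-degree polynomial through four sample points."""
    A = [[x ** 3, x ** 2, x, 1.0] for x, _ in points]
    return solve(A, [y for _, y in points])

def poly_ransac(points, iterations=200, tol=1.0, seed=0):
    """Fit y = a x^3 + b x^2 + c x + d, keeping the largest inlier set."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iterations):
        sample = rng.sample(points, 4)
        if len({p[0] for p in sample}) < 4:
            continue  # degenerate sample: repeated x values
        a, b, c, d = cubic_through(sample)
        inl = [(x, y) for x, y in points
               if abs(a * x ** 3 + b * x ** 2 + c * x + d - y) < tol]
        if len(inl) > len(best_inliers):
            best, best_inliers = (a, b, c, d), inl
    return best, best_inliers
```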


    • Higher-Order Polynomial Regression

    Polynomial regression helps to further refine the found clusters and to describe the lane model: the continuous flow of road lanes can suitably be interpreted with third-degree polynomial regression and the least squares method. Third-order polynomials are used to represent the found structure of the lane markings.

    The important lane- and vehicle-related coefficients of

    y(x) = (M/6)·x³ + (κ/2)·x² + sin(φ)·x + y₀

    are determined with the least squares method, where the polynomial is fitted to the points of an individual lane cluster, as illustrated in the images below.
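    The least-squares fit over a whole cluster can be sketched via the 4×4 normal equations. Under the lane model above, the fitted coefficients [c0, c1, c2, c3] would then relate to the physical quantities as c0 = y₀, c1 = sin(φ), c2 = κ/2 and c3 = M/6 (an interpretation inferred from the coefficient definitions, not stated code from the thesis).

```python
def polyfit3(xs, ys):
    """Least-squares cubic fit via the 4x4 normal equations V^T V c = V^T y."""
    S = [sum(x ** k for x in xs) for k in range(7)]                 # power sums S0..S6
    T = [sum((x ** i) * y for x, y in zip(xs, ys)) for i in range(4)]
    M = [[S[i + j] for j in range(4)] + [T[i]] for i in range(4)]
    for c in range(4):                  # Gaussian elimination, partial pivoting
        p = max(range(c, 4), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 4):
            f = M[r][c] / M[c][c]
            for k in range(c, 5):
                M[r][k] -= f * M[c][k]
    coeffs = [0.0] * 4                  # [c0, c1, c2, c3]: y = c0 + c1 x + c2 x^2 + c3 x^3
    for r in range(3, -1, -1):
        coeffs[r] = (M[r][4] - sum(M[r][k] * coeffs[k] for k in range(r + 1, 4))) / M[r][r]
    return coeffs
```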


    • Lane-Model: Centre-Line

    The previously detected and separated set of lanes is further processed by centre-line estimation and by calculating the distance to the currently present lanes. The fitted curves from the previous step are used to estimate the centre line; thanks to the inverse perspective mapping, distances can be read directly in pixel units without further calculations. The centre line itself is also formed with the help of regression: the midpoints of the two nearby polynomial functions form a set of points that is finally passed to the method of least squares.
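    As a side note, for two polynomials of the same degree the curve of midpoints is itself a polynomial whose coefficients are the coefficient-wise mean, so the least-squares refit over the midpoints would reproduce it exactly on noise-free input. A small sketch (illustrative, not the thesis code):

```python
def centre_line(left_coeffs, right_coeffs):
    """Midpoints of two cubics at every x form another cubic whose
    coefficients are the coefficient-wise average of the two."""
    return [(l + r) / 2.0 for l, r in zip(left_coeffs, right_coeffs)]

def eval_poly(c, x):
    # c = [c0, c1, c2, c3] for y = c0 + c1 x + c2 x^2 + c3 x^3
    return c[0] + c[1] * x + c[2] * x ** 2 + c[3] * x ** 3
```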


    • Tracking of Polynomial Coefficients: Kalman-Filter

    The Kalman filter equations are prepared to estimate the third-degree polynomial coefficients and thus to track the state of the computed centre line.

    The state vector of the Kalman filter (see image below) is built from the previously mentioned lane-to-vehicle coefficients, M (change of curvature), κ (curvature), sin(φ) (yaw angle) and y₀ (transverse offset of the vehicle), resulting in a four-dimensional Kalman filter for tracking the polynomial coefficients.

    Furthermore, the transition matrix A is built from the derivatives of the previously introduced third-degree polynomial, with x set to v·Δt, so that the change of the system can not only be computed for every Δt from one update to the next but can also be predicted for future states.
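    For the state ordering [y₀, sin(φ), κ, M], taking the successive derivatives of the polynomial at x = v·Δt yields the upper-triangular Taylor-shift matrix sketched below. Only the prediction step is shown; the exact ordering and the update step in the thesis may differ.

```python
def transition_matrix(v, dt):
    """Taylor-shift transition for the state [y0, sin(phi), kappa, M]
    over the travelled distance d = v * dt: each row is the next
    derivative of the cubic lane polynomial evaluated at d."""
    d = v * dt
    return [[1.0, d,   d * d / 2.0, d ** 3 / 6.0],
            [0.0, 1.0, d,           d * d / 2.0],
            [0.0, 0.0, 1.0,         d],
            [0.0, 0.0, 0.0,         1.0]]

def predict(A, s):
    """Kalman prediction of the state: s' = A s (no control input)."""
    return [sum(A[i][j] * s[j] for j in range(4)) for i in range(4)]
```

    With this A, a pure-curvature state correctly accumulates lateral offset and heading as the vehicle moves forward, which is what makes prediction into the future possible.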

  • Experimental Results

    The table below shows the entire test set of synthetic and real data samples with which the developed lane keeping approach was tested.

  • Conclusion

    The test set-up was evaluated for runtime on both devices, the Raspberry Pi and the Intel NUC (see the table in the section “Experimental Results”), to compare the achieved timings against the defined real-time requirement of an update cycle of at least 30 fps. The Raspberry Pi, as the low-performance solution, performed very well: the frame rate never dropped below 30 throughout the tests with all of the prepared data samples. The Intel NUC, as the high-performance solution, achieved much faster timings, as expected.
    Both the synthetic and the real data samples were used to evaluate the quality of detection and tracking of lane markings. The simulated data performed very well for detection and tracking, whereas on real data the present lane markings could only partly be detected properly. The percentages of appropriately and successfully detected lanes are shown in the table in the section “Experimental Results”: the straight scenario reached 100% successfully detected markings with appropriate tracking. The SIMULATION 3 LANES and TESTFIELD scenarios already showed minor problems in detecting the markings throughout the simulation, due to the occurrence of curves, where it is not fully guaranteed that lanes are successfully detected and tracked at every point in time. With the simulation of the TESTFIELD scenario, the approach achieved very good detection and tracking; detection remained very stable in general, with very smooth tracking along the present markings. False detection and tracking occurred mostly when the vehicle was leaving a curve; entering curved scenarios always worked very well, with a smooth transition from straight into curved regions. An adjustment of the Kalman parameters could probably achieve better results, with fewer false detections and better compensation of suddenly appearing changes in the measurement, smoothing out the tracking.

    In general, the provided approach for lane detection and tracking can ensure robust lane keeping in straight and slightly bent road scenarios with clearly visible lane markings; the occurrence of strongly bent curves and similar sections is still difficult to handle, and no stable lane keeping functionality can be guaranteed there.

