
Camera calibration With OpenCV

http://www.swarthmore.edu/NatSci/mzucker1/opencv-2.4.10-docs/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
http://www.pudn.com/Download/item/id/1006592.html
http://read.pudn.com/downloads214/sourcecode/graph/texture_mapping/1006592/FisheyeImageCalibration3.m__.htm
https://stackoverflow.com/questions/28483324/fisheye-lens-calibration-with-opencv-3-0-beta


Cameras have been around for a long, long time. However, with the introduction of cheap pinhole cameras in the late 20th century, they became a common occurrence in our everyday life. Unfortunately, this cheapness comes at a price: significant distortion. Luckily, these distortions are constant, and with a calibration and some remapping we can correct them. Furthermore, with calibration you may also determine the relation between the camera’s natural units (pixels) and real-world units (for example millimeters).

Theory

For the distortion OpenCV takes into account the radial and tangential factors. For the radial factor one uses the following formula:

x_{corrected} = x( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
y_{corrected} = y( 1 + k_1 r^2 + k_2 r^4 + k_3 r^6)

So for an old pixel point at (x, y) coordinates in the input image, its position in the corrected output image will be (x_{corrected}, y_{corrected}). The presence of radial distortion manifests in the form of the “barrel” or “fish-eye” effect.

Tangential distortion occurs because the image taking lenses are not perfectly parallel to the imaging plane. It can be corrected via the formulas:

x_{corrected} = x + [ 2p_1xy + p_2(r^2+2x^2)]
y_{corrected} = y + [ p_1(r^2+ 2y^2)+ 2p_2xy]

So we have five distortion parameters, which in OpenCV are presented as a one-row matrix with 5 columns:

Distortion_{coefficients}=(k_1 \hspace{10pt} k_2 \hspace{10pt} p_1 \hspace{10pt} p_2 \hspace{10pt} k_3)

Now for the unit conversion we use the following formula:

\left [ \begin{matrix} x \\ y \\ w \end{matrix} \right ] = \left [ \begin{matrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{matrix} \right ] \left [ \begin{matrix} X \\ Y \\ Z \end{matrix} \right ]

Here the presence of w is explained by the use of the homogeneous coordinate system (with w = Z). The unknown parameters are f_x and f_y (camera focal lengths) and (c_x, c_y), the optical center expressed in pixel coordinates. If for both axes a common focal length is used with a given aspect ratio (usually 1), then f_y = f_x * a, and in the upper formula we will have a single focal length f. The matrix containing these four parameters is referred to as the camera matrix. While the distortion coefficients are the same regardless of the camera resolution used, the camera matrix should be scaled from the calibrated resolution to the current resolution.
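To make the model concrete, here is a minimal sketch (not part of the tutorial code) that applies the radial and tangential formulas above to one normalized point and then converts the result to pixel coordinates with the camera matrix. All numeric values are made-up examples:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // Made-up example intrinsics and distortion coefficients.
        const double fx = 650.0, fy = 650.0, cx = 319.5, cy = 239.5;
        const double k1 = -0.42, k2 = 0.51, p1 = 0.0, p2 = 0.0, k3 = -0.58;

        // A made-up point in normalized image coordinates (x = X/Z, y = Y/Z).
        const double x = 0.30, y = -0.20;
        const double r2 = x*x + y*y;                              // r^2
        const double radial = 1 + k1*r2 + k2*r2*r2 + k3*r2*r2*r2;

        // Radial and tangential terms, as in the formulas above.
        const double xc = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x);
        const double yc = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y;

        // Unit conversion with the camera matrix (w = Z = 1 for a normalized point).
        const double u = fx*xc + cx;
        const double v = fy*yc + cy;
        std::printf("pixel coordinates: (%.2f, %.2f)\n", u, v);
        return 0;
    }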

The process of determining these two matrices is the calibration. Calculation of these parameters is done through basic geometrical equations. The equations used depend on the chosen calibrating objects. Currently OpenCV supports three types of objects for calibration:

  • Classical black-white chessboard
  • Symmetrical circle pattern
  • Asymmetrical circle pattern

Basically, you need to take snapshots of these patterns with your camera and let OpenCV find them. Each found pattern results in a new equation. To solve the system you need at least a predetermined number of pattern snapshots to form a well-posed equation system. This number is higher for the chessboard pattern and lower for the circle patterns. For example, in theory the chessboard pattern requires at least two snapshots. However, in practice there is a good amount of noise in the input images, so for good results you will probably need at least 10 good snapshots of the pattern in different positions.

Goal

The sample application will:

  • Determine the distortion matrix
  • Determine the camera matrix
  • Take input from Camera, Video and Image file list
  • Read configuration from XML/YAML file
  • Save the results into XML/YAML file
  • Calculate re-projection error

Source code

You may also find the source code in the samples/cpp/tutorial_code/calib3d/camera_calibration/ folder of the OpenCV source library or download it from here. The program has a single argument: the name of its configuration file. If none is given, it will try to open the one named “default.xml”. Here’s a sample configuration file in XML format. In the configuration file you may choose to use a camera, a video file, or an image list as input. If you opt for the last one, you will need to create a configuration file that enumerates the images to use. Here’s an example of this. The important part to remember is that the images need to be specified using an absolute path or a path relative to your application’s working directory. You may find all this in the samples directory mentioned above.

The application starts by reading the settings from the configuration file. Although this is an important part of the application, it has nothing to do with the subject of this tutorial: camera calibration. Therefore, I’ve chosen not to post that code here. You can find the technical background in the File Input and Output using XML and YAML files tutorial.

Explanation

  1. Read the settings.

    Settings s;
    const string inputSettingsFile = argc > 1 ? argv[1] : "default.xml";
    FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
    if (!fs.isOpened())
    {
        cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
        return -1;
    }
    fs["Settings"] >> s;
    fs.release();                                         // close Settings file

    if (!s.goodInput)
    {
        cout << "Invalid input detected. Application stopping. " << endl;
        return -1;
    }

    For this I’ve used the simple OpenCV class input operation. After reading the file, an additional post-processing function checks the validity of the input. The goodInput variable will be true only if all inputs are valid.
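    For reference, the fs["Settings"] >> s line works because OpenCV's FileStorage looks up a global read function for the type. A minimal sketch, assuming a stripped-down Settings with only a few of the real class's fields (the node names follow the sample configuration file, but treat the details as illustrative):

    #include <opencv2/core/core.hpp>

    class Settings
    {
    public:
        Settings() : squareSize(0), nrFrames(0), goodInput(false) {}

        cv::Size boardSize;   // number of items per row and column on the board
        float squareSize;     // size of a board square in your chosen unit
        int nrFrames;         // number of frames to use for calibration
        bool goodInput;       // set by the validity check after reading

        void read(const cv::FileNode& node)   // invoked via the global read() below
        {
            node["BoardSize_Width"]  >> boardSize.width;
            node["BoardSize_Height"] >> boardSize.height;
            node["Square_Size"]      >> squareSize;
            node["Calibrate_NrOfFrameToUse"] >> nrFrames;
            goodInput = boardSize.width > 0 && boardSize.height > 0 && squareSize > 0;
        }
    };

    // This global read() is what makes `fs["Settings"] >> s` work.
    static void read(const cv::FileNode& node, Settings& x,
                     const Settings& default_value = Settings())
    {
        if (node.empty())
            x = default_value;
        else
            x.read(node);
    }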

  2. Get the next input; if it fails or we have enough images, calibrate. After this we have a big loop where we do the following: get the next image from the image list, camera, or video file. If this fails, or we already have enough images, we run the calibration process. In the case of an image list we then step out of the loop; otherwise the remaining frames are undistorted (if the option is set) by switching from DETECTION mode to CALIBRATED mode.

    for(int i = 0;;++i)
    {
        Mat view;
        bool blinkOutput = false;

        view = s.nextImage();

        //-----  If no more images, or got enough, then stop calibration and show result -----
        if( mode == CAPTURING && imagePoints.size() >= (unsigned)s.nrFrames )
        {
            if( runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints))
                mode = CALIBRATED;
            else
                mode = DETECTION;
        }
        if(view.empty())          // If no more images then run calibration, save and stop loop.
        {
            if( imagePoints.size() > 0 )
                runCalibrationAndSave(s, imageSize, cameraMatrix, distCoeffs, imagePoints);
            break;
        }

        imageSize = view.size();  // Format input image.
        if( s.flipVertical )    flip( view, view, 0 );

    For some cameras we may need to flip the input image. Here we do this too.

  3. Find the pattern in the current input. The formation of the equations I mentioned above aims at finding major patterns in the input: in the case of the chessboard these are the corners of the squares, and for the circles, well, the circles themselves. The positions of these points form the result, which is written into the pointBuf vector.

    vector<Point2f> pointBuf;

    bool found;
    switch( s.calibrationPattern ) // Find feature points on the input format
    {
    case Settings::CHESSBOARD:
        found = findChessboardCorners( view, s.boardSize, pointBuf,
            CV_CALIB_CB_ADAPTIVE_THRESH | CV_CALIB_CB_FAST_CHECK | CV_CALIB_CB_NORMALIZE_IMAGE);
        break;
    case Settings::CIRCLES_GRID:
        found = findCirclesGrid( view, s.boardSize, pointBuf );
        break;
    case Settings::ASYMMETRIC_CIRCLES_GRID:
        found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
        break;
    }

    Depending on the type of the input pattern you use either the findChessboardCorners or the findCirclesGrid function. To both of them you pass the current image and the size of the board, and you’ll get back the positions of the pattern points. Furthermore, they return a boolean that states whether the pattern was found in the input (we only need to take into account the images where this is true!).

    Then again, in the case of cameras we only take camera images after an input delay time has passed. This is done to allow the user to move the chessboard around and get different images. Similar images result in similar equations, and similar equations at the calibration step will form an ill-posed problem, so the calibration will fail. For the chessboard pattern the detected corner positions are only approximate. We may improve this by calling the cornerSubPix function; it will produce a better calibration result. After this we add a valid input’s result to the imagePoints vector to collect all the equations into a single container. Finally, for visual feedback we draw the found points on the input image using the drawChessboardCorners function.

    if ( found)                // If done with success,
    {
        // improve the found corners' coordinate accuracy for chessboard
        if( s.calibrationPattern == Settings::CHESSBOARD)
        {
            Mat viewGray;
            cvtColor(view, viewGray, CV_BGR2GRAY);
            cornerSubPix( viewGray, pointBuf, Size(11,11),
                Size(-1,-1), TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 30, 0.1 ));
        }

        if( mode == CAPTURING &&  // For camera only take new samples after delay time
            (!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
        {
            imagePoints.push_back(pointBuf);
            prevTimestamp = clock();
            blinkOutput = s.inputCapture.isOpened();
        }

        // Draw the corners.
        drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
    }
  4. Show state and result to the user, plus command line control of the application. This part shows text output on the image.

    //----------------------------- Output Text ------------------------------------------------
    string msg = (mode == CAPTURING) ? "100/100" :
                 mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
    int baseLine = 0;
    Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
    Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);

    if( mode == CAPTURING )
    {
        if(s.showUndistorsed)
            msg = format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
        else
            msg = format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
    }

    putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ? GREEN : RED);

    if( blinkOutput )
        bitwise_not(view, view);

    If we ran calibration and got the camera matrix with the distortion coefficients, we may want to correct the image using the undistort function:

    //------------------------- Video capture output undistorted --------------------------------
    if( mode == CALIBRATED && s.showUndistorsed )
    {
        Mat temp = view.clone();
        undistort(temp, view, cameraMatrix, distCoeffs);
    }
    //------------------------------ Show image and check for input commands -------------------
    imshow("Image View", view);

    Then we wait for an input key: if it is u we toggle the distortion removal, if it is g we start the detection process again, and for the ESC key we quit the application:

    char key = waitKey(s.inputCapture.isOpened() ? 50 : s.delay);

    if( key == ESC_KEY )
        break;

    if( key == 'u' && mode == CALIBRATED )
        s.showUndistorsed = !s.showUndistorsed;

    if( s.inputCapture.isOpened() && key == 'g' )
    {
        mode = CAPTURING;
        imagePoints.clear();
    }
  5. Show the distortion removal for the images too. When you work with an image list it is not possible to remove the distortion inside the loop, so you must do this after the loop. Taking advantage of this, I’ll now expand the undistort function, which in fact first calls initUndistortRectifyMap to find the transformation matrices and then performs the transformation using the remap function. Because, after a successful calibration, the map calculation needs to be done only once, using this expanded form you may speed up your application:

    if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
    {
        Mat view, rview, map1, map2;
        initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
            getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
            imageSize, CV_16SC2, map1, map2);

        for(int i = 0; i < (int)s.imageList.size(); i++ )
        {
            view = imread(s.imageList[i], 1);
            if(view.empty())
                continue;
            remap(view, rview, map1, map2, INTER_LINEAR);
            imshow("Image View", rview);
            char c = waitKey();
            if( c == ESC_KEY || c == 'q' || c == 'Q' )
                break;
        }
    }

The calibration and save

Because the calibration needs to be done only once per camera, it makes sense to save it after a successful calibration. This way later on you can just load these values into your program. Due to this we first make the calibration, and if it succeeds we save the result into an OpenCV style XML or YAML file, depending on the extension you give in the configuration file.

Therefore, in the first function we just split up these two processes. Because we want to save many of the calibration variables, we create them here and pass them on to both the calibration and the saving function. Again, I’ll not show the saving part, as that has little in common with the calibration. Explore the source file to find out how and what:

bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                           vector<vector<Point2f> > imagePoints )
{
    vector<Mat> rvecs, tvecs;
    vector<float> reprojErrs;
    double totalAvgErr = 0;

    bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs,
                             reprojErrs, totalAvgErr);
    cout << (ok ? "Calibration succeeded" : "Calibration failed")
         << ". avg re projection error = " << totalAvgErr;

    if( ok )   // save only if the calibration was done with success
        saveCameraParams( s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs,
                          imagePoints, totalAvgErr);
    return ok;
}

We do the calibration with the help of the calibrateCamera function. It has the following parameters:

  • The object points. This is a vector of vectors of Point3f that describes, for each input image, how the pattern should look. If we have a planar pattern (like a chessboard) then we can simply set all Z coordinates to zero. This is a collection of the positions of the important pattern points. Because we use a single pattern for all the input images, we can calculate this just once and replicate it for all the other input views. We calculate the corner points with the calcBoardCornerPositions function as:

    void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                                  Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
    {
        corners.clear();

        switch(patternType)
        {
        case Settings::CHESSBOARD:
        case Settings::CIRCLES_GRID:
            for( int i = 0; i < boardSize.height; ++i )
                for( int j = 0; j < boardSize.width; ++j )
                    corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
            break;

        case Settings::ASYMMETRIC_CIRCLES_GRID:
            for( int i = 0; i < boardSize.height; i++ )
                for( int j = 0; j < boardSize.width; j++ )
                    corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
            break;
        }
    }

    And then replicate it for every view as:

    vector<vector<Point3f> > objectPoints(1);
    calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);
    objectPoints.resize(imagePoints.size(), objectPoints[0]);
  • The image points. This is a vector of vectors of Point2f that contains, for each input image, the coordinates of the important points (corners for the chessboard and circle centers for the circle patterns). We have already collected this from the findChessboardCorners or findCirclesGrid function; we just need to pass it on.

  • The size of the image acquired from the camera, video file or the images.

  • The camera matrix. If we use the fixed aspect ratio option, we need to set f_x beforehand (here to 1.0), because the calibration then keeps the ratio f_x/f_y fixed at its initial value:

    cameraMatrix = Mat::eye(3, 3, CV_64F);
    if( s.flag & CV_CALIB_FIX_ASPECT_RATIO )
        cameraMatrix.at<double>(0,0) = 1.0;
  • The distortion coefficient matrix. Initialize it with zeros:

    distCoeffs = Mat::zeros(8, 1, CV_64F); 
  • For all the views the function will calculate rotation and translation vectors, which transform the object points (given in the model coordinate space) into the camera coordinate space. The sixth and seventh arguments are the output vectors that contain, in the i-th position, the rotation and translation vector for the i-th view (a small sketch after this list shows how to apply them).

  • The final argument is the flag. Here you specify options such as fixing the aspect ratio of the focal length, assuming zero tangential distortion, or fixing the principal point.

double rms = calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix,                             distCoeffs, rvecs, tvecs, s.flag|CV_CALIB_FIX_K4|CV_CALIB_FIX_K5); 
  • The function returns the average re-projection error. This number gives a good estimate of the precision of the found parameters and should be as close to zero as possible. Given the intrinsic, distortion, rotation, and translation matrices, we may calculate the error for one view by using projectPoints to first transform the object points into image points. Then we calculate the absolute norm between what we got from our transformation and what the corner/circle finding algorithm produced. To find the average error we compute the root mean square of these errors over all the calibration images.

    double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
                                      const vector<vector<Point2f> >& imagePoints,
                                      const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                                      const Mat& cameraMatrix , const Mat& distCoeffs,
                                      vector<float>& perViewErrors)
    {
        vector<Point2f> imagePoints2;
        int i, totalPoints = 0;
        double totalErr = 0, err;
        perViewErrors.resize(objectPoints.size());

        for( i = 0; i < (int)objectPoints.size(); ++i )
        {
            projectPoints( Mat(objectPoints[i]), rvecs[i], tvecs[i], cameraMatrix,  // project
                           distCoeffs, imagePoints2);
            err = norm(Mat(imagePoints[i]), Mat(imagePoints2), CV_L2);              // difference

            int n = (int)objectPoints[i].size();
            perViewErrors[i] = (float) std::sqrt(err*err/n);                        // save for this view
            totalErr        += err*err;                                             // sum it up
            totalPoints     += n;
        }

        return std::sqrt(totalErr/totalPoints);              // root mean square over all points
    }
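    To make the role of the rotation and translation vectors concrete, here is a small sketch (not from the tutorial) that converts one rotation vector into a 3x3 matrix with the Rodrigues function and maps a single board corner from the model coordinate space into the camera coordinate space. It assumes the CV_64F vectors that calibrateCamera outputs:

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>

    // X_camera = R * X_model + t for one object point of view i.
    cv::Point3d modelToCamera(const cv::Mat& rvec, const cv::Mat& tvec,
                              const cv::Point3d& objectPoint)
    {
        cv::Mat R;
        cv::Rodrigues(rvec, R);      // 3x1 rotation vector -> 3x3 rotation matrix

        cv::Mat X = (cv::Mat_<double>(3,1) << objectPoint.x, objectPoint.y, objectPoint.z);
        cv::Mat Xc = R * X + tvec;   // rotate into the camera frame, then translate

        return cv::Point3d(Xc.at<double>(0), Xc.at<double>(1), Xc.at<double>(2));
    }

    Projecting such a camera-space point through the camera matrix and the distortion model (which is what projectPoints does internally) yields the predicted image point that is compared against the detected one in the error computation above.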

Results

Let there be this input chessboard pattern, which has a size of 9 x 6. I’ve used an AXIS IP camera to create a couple of snapshots of the board and saved them into a VID5 directory. I’ve put this inside the images/CameraCalibration folder of my working directory and created the following VID5.XML file that describes which images to use:

<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibration/VID5/xx1.jpg
images/CameraCalibration/VID5/xx2.jpg
images/CameraCalibration/VID5/xx3.jpg
images/CameraCalibration/VID5/xx4.jpg
images/CameraCalibration/VID5/xx5.jpg
images/CameraCalibration/VID5/xx6.jpg
images/CameraCalibration/VID5/xx7.jpg
images/CameraCalibration/VID5/xx8.jpg
</images>
</opencv_storage>

Then I passed images/CameraCalibration/VID5/VID5.XML as input in the configuration file. Here’s a chessboard pattern found during the runtime of the application:

A found chessboard

After applying the distortion removal we get:

Distortion removal for File List

The same works for this asymmetrical circle pattern by setting the input width to 4 and the height to 11. This time I’ve used a live camera feed, specifying its ID (“1”) for the input. Here’s how a detected pattern should look:

Asymmetrical circle detection

In both cases, in the specified output XML/YAML file you’ll find the camera matrix and the distortion coefficients:

<Camera_Matrix type_id="opencv-matrix">
  <rows>3</rows>
  <cols>3</cols>
  <dt>d</dt>
  <data>
    6.5746697944293521e+002 0. 3.1950000000000000e+002
    0. 6.5746697944293521e+002 2.3950000000000000e+002
    0. 0. 1.</data>
</Camera_Matrix>
<Distortion_Coefficients type_id="opencv-matrix">
  <rows>5</rows>
  <cols>1</cols>
  <dt>d</dt>
  <data>
    -4.1802327176423804e-001 5.0715244063187526e-001 0. 0.
    -5.7843597214487474e-001</data>
</Distortion_Coefficients>

Add these values as constants to your program, call initUndistortRectifyMap and the remap function to remove distortion, and enjoy distortion-free inputs even from cheap, low-quality cameras.
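Alternatively, instead of hard-coding the numbers, you can load them back with FileStorage. A minimal sketch, assuming the results were saved to a file named out_camera_data.xml with the node names shown above (the file and image names are made-up examples):

    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/highgui/highgui.hpp>

    int main()
    {
        // Load the saved calibration results.
        cv::Mat cameraMatrix, distCoeffs;
        cv::FileStorage fs("out_camera_data.xml", cv::FileStorage::READ);
        if (!fs.isOpened())
            return -1;
        fs["Camera_Matrix"]           >> cameraMatrix;
        fs["Distortion_Coefficients"] >> distCoeffs;
        fs.release();

        cv::Mat view = cv::imread("input.jpg", 1);   // any image from the same camera
        if (view.empty())
            return -1;

        // Compute the undistortion maps once, then remap each frame.
        cv::Mat map1, map2, rview;
        cv::initUndistortRectifyMap(cameraMatrix, distCoeffs, cv::Mat(),
            cv::getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, view.size(), 1, view.size(), 0),
            view.size(), CV_16SC2, map1, map2);
        cv::remap(view, rview, map1, map2, cv::INTER_LINEAR);

        cv::imshow("Undistorted", rview);
        cv::waitKey();
        return 0;
    }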

You may observe a runtime instance of this on YouTube here.
