
stereo geometry in OpenGL

what is stereoscopy?
 

Binocular vision

Most people see naturally through two eyes.  The eyes are placed an average of 63.5 mm apart, the exact separation depending on genetic factors.  Two similar but different viewpoints generate a small amount of parallax wherever the depth of the viewed scene differs from the point on which the eyes are converged.  The computational power of the brain then takes over and facilitates depth perception.  A subgroup of the population can only see out of one eye, due to anatomical or physiological problems, and cannot see stereoscopically.

Depth cues

Binocular parallax is a powerful depth cue in binocular individuals, but it is by no means the only one used by the mind.  Many depth cues are available to a single eye, and anything that conveys depth in a standard photograph can be included in this category.

Motion parallax is a depth cue which can be used by the monocular individual or the cinematographer: it occurs because distant objects move less across a scene - for example in the view from a moving train window.  Perspective and texture gradient allow depth perception by objects getting smaller as their distance from the viewer increases.  Fog and haze increasing with distance also contribute to depth perception.  Occlusion of objects by other objects nearer to the viewer is a powerful depth cue that is available in normal scenes but not applicable to radiographs.  Known objects also help with depth perception - where an object of known size (say, an elephant) appears in the scene, one can often make a guess as to how far away it is and compare it to other objects in the same scene.  Shading and lighting also play an important role, especially if one knows where the light source is or if it is included in the scene.

Benefits of stereoscopy

A well presented stereoscopic image is pleasing to look at.  The depth of structures is readily apparent and the appreciation of details not observed before becomes evident.  The combination of two views can provide more useful information in a scene than a single view, or two views that are taken from widely disparate viewpoints.  Relative depth can be easily gauged and with proper measurement apparatus (as in aerial photography or radiostereogrammetrical analysis) absolute depth can be measured in a pair of images.

Problems in stereoscopy

A poorly presented stereoscopic image leaves either no impression, or one of headaches, eyestrain and nausea in the unfortunate viewer.

Most of the bad stereoscopic experiences (of which there are many) are due to poor presentation.  It is not uncommon to see images presented with rotation or vertical parallax errors, and also not uncommon to see images with the views swapped (pseudostereo).  Geometry is also important: if it is changed (e.g. shifting from a small screen to a large projection screen), eyestrain may be expected.

Accommodation-convergence mismatch is the problem that arises when the projected image plane is in focus but the eyes are converging at a different depth plane.  Humans have a tolerance for some mismatch, but when it becomes too large, either the image defocuses or stereo fusion is lost.  This link between accommodation and convergence is hard-wired into our nervous systems but can be overcome to an extent with practice.  It is also easier to compensate when viewing close images (converging), as most people are not able to make their eyes diverge.

Ghosting is the visibility of a small proportion of the image intended for the other eye.  It is a problem for most of the computer based viewing systems, where the image pathways are not kept physically separated.  Where high contrast images are used (such as when looking at metallic implants on a radiograph), ghosting becomes a significant problem.  A solution has yet to be devised for computer viewers, but a purely optical viewing system should not give any ghosting.

viewing systems

Free viewing

This is the cheapest method of viewing, but also requires the most practice and dedication.  Images can be viewed cross-eyed or parallel.  Most people are unable to diverge their eyes and therefore cannot stereoscopically fuse images that are separated by a distance larger than their intraocular distance.  Cross-eyed viewing is easier for most people, but can still be difficult due to the neurological linking of accommodation and convergence.  The main benefits of free viewing are that it can be done anywhere, anytime, with no viewing aids and no image ghosting.

Optical aids


Included here are optical devices that permit the delivery of images to each eye with tolerable accommodation-convergence disparity.  The device can be as simple as a handheld mirror, or quite complex and bulky (some of the large mirror-based stereoscopes used for photogrammetry, or old two-wall radiograph viewers).  I carry a small mini-Wheatstone viewer - essentially two small periscopes laid on their sides to spread one's effective intraocular distance - in my bag; it can be used to quickly view images on any computer screen, or radiographs printed in parallel view format.

Wheatstone principle

One of the easiest stereoscopes to make is one using a single mirror.  Two computer screens, radiograph viewing boxes, or pictures are placed at 45° to each other with one image laterally reversed.  A carefully sized mirror is then placed on the plane between the two images - this superimposes the images optically in the correct viewing position.  If at all possible, a front-silvered mirror should be used to eliminate ghosting.

Mirror based stereoscope

Despite the ease of use of digital displays, I believe that a dedicated user will gain the most information from an optical film viewing system at the moment.  This is for two reasons - the lack of ghosting in a well designed optical system, and the wider dynamic range of transparency film as compared to prints or screen displays.

Liquid crystal shutterglasses

Liquid crystal shutterglasses are a neat device.  The glasses are formed of liquid crystal panels that alternately blank out in sync with the left and right images being displayed on the screen.  The ability to drive the glasses and alternate left/right images on the screen (page-flipping) results in a smooth, flicker-free stereo appearance if the speed is high enough.  I find 120 Hz to be comfortably flicker-free stereo, but there are people who can comfortably look at 60 Hz stereo, as well as those that need rates of 140 Hz+ for comfort.  Also, if shutterglasses are used in a room with fluorescent lights, the interference between the shutterglasses and the fluorescent tubes will cause major flicker.

As long as the images aren't given a horrible amount of parallax, the accommodation-convergence discrepancy is largely dealt with.  Ghosting is also an issue with shutterglasses, especially in high contrast regions of the image.

Shutterglasses are a reasonably cheap (US$40-100+) solution.

Polarized light systems

IMAX has used these in projection systems for years with success.  Standard linearly polarized glasses have polarizers oriented at 135° for the left eye and 45° for the right eye.  The same polarizer orientations are used on the projectors, and a silver non-depolarizing projection screen needs to be used.  There are a number of other polarizing displays available, such as the VREX micropol filter (horizontally interlaced alternate polarization) or the Z-Screen from Stereographics (active screen shutter with passive circular polarizing glasses).  The advantage of polarized systems is that they allow multiple viewers easy access to the picture.  The drawbacks are ghosting, which is still present, and the need for users to wear glasses.

Autostereoscopic displays

Autostereo is the ability to deliver separate images to each eye without the use of viewing glasses.  At present, there are two main methods for achieving this - use of a barrier to block light destined for the contralateral eye, or use of a lenticular lens to direct light into the chosen eye.  Autostereoscopic displays are available with two to nine viewing zones, but for each additional viewing zone the effective resolution of the image is degraded.  The other drawback of autostereoscopic displays is the requirement for the user to be in a fairly well defined "sweet spot", or else the image will be displayed in mono or pseudostereo - head tracking devices can overcome this, but are cumbersome and expensive at present.  Both raster barrier and lenticular displays suffer from ghosting, which can be significant when high contrast images are used.

Displays are currently quite expensive and do have technical deficits, but these are being addressed by developers.  Solutions such as the DTI and Sharp screens use a switchable raster barrier and can be used for conventional mono work as well.

Where picture quality is important in an autostereoscopic screen, the SeeReal lenticular autostereoscopic screens offer the clearest picture with the least ghosting at the present time.

Emerging technology

Technology will continue to develop and there are some interesting ideas being worked on.  Holographic displays, elemental lens arrays and true volumetric displays are all being developed presently.  One of the most interesting developing technologies uses your retina as the primary projection screen - I'd still feel uncomfortable at having two projectors aiming for my eyes but can see the potential.

Anaglyph

The anaglyph is the use of color (usually red-left, cyan-right) to code for the left and right images.  The images can be printed, displayed, or projected on just about any medium, and a simple set of glasses with the appropriate lenses directs the appropriate image to the appropriate eye.  "Retinal rivalry" is the conflict that arises in the brain from having two different colors represent the same object.  Apparently this does not bother a lot of people - I find anaglyphs unusable for any length of time.


stereoscopic radiology

 

 
Stereoscopic radiology is the use of stereoscopic imaging principles on radiographs and volumetric data.

Roentgen described the first radiograph in 1895, and it was only a matter of 2-3 years before stereoradiographs were being taken.  A peak of popularity followed, with most radiologists using the technique by the 1930s.  The discovery that x-rays could be harmful did a lot to kill off the technique, as the extra radiation could not be justified.  Today there are few radiologists, and even fewer clinicians, who have been exposed to stereoscopy, much less use it.  I believe that a large part of this is due to the main mode of dissemination of knowledge in the medical world - the journal.  Stereoscopy has to be experienced first hand, using a well set up viewing device, to be appreciated; publication in journal format without adequate viewing aids does not help the potential viewer.

There are a number of situations where plain stereoscopic radiographs may still be of significant benefit:
    - in practices (developing world, rural locations, military field hospitals) where CT scanning is not available, but more information is wanted
    - in situations where metallic implant components need to be imaged, but too much implant scatter occurs in the CT machine (older scanners)
    - dislocated hip or shoulder, where the lateral view is uninterpretable and often painful to obtain
    - erect spine - reconstructed CT and MRI data in the scoliotic, deformed, or unstable spine is not available in the erect position

Plain radiographs need to be treated differently from photographic images when viewing, for a number of reasons.  Depth cues from perspective are preserved, though the obscuring of objects further from the viewer obviously is not.  In photographic images, the focus plane is presented sharpest and objects progressively defocus away from it.  Radiographs are similar, though one needs to remember that the plane of sharpest focus is at the film plane and everything closer to the tube will be progressively defocused.  There are also no lighting or depth haze cues in radiographs to rely on.  With objects that occlude most of the x-rays from reaching the film, such as metalware or dense soft tissue, only a silhouette is recorded and the details that are available elsewhere in the radiograph are not visualized.

Technique

The most important part of taking a pair of stereoradiographs is to have the patient and film stay in the same location whilst the tube is shifted.  Tube to film distance should remain constant for both films.  Most people who have written on the subject have recommended a tube shift of about 1/10th of the tube to film distance.  This can be a bit less for smaller subjects if the image will be magnified (hypostereo).  As the tube can be regarded as a point source of radiation, toeing in the tube should have no effect on the picture unless the fulcrum on which the tube swings is placed eccentric to the tube.  Use of a grid does cause a significant gradient that is visually obvious - we have not decided what to do with the grid yet.
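As a hedged numerical sketch of this geometry (illustrative only, not a clinical protocol - the function names are my own), the 1/10 rule and the parallax recorded on the film for a structure at a given height follow from similar triangles:

```c
#include <math.h>

/* Illustrative stereoradiograph geometry.  D = tube-to-film distance,
   h = height of a structure above the film, s = lateral tube shift;
   all in mm. */
double tube_shift(double D) { return D / 10.0; }   /* the ~1/10 rule */

/* By similar triangles, shifting the tube laterally by s moves the
   shadow of a point at height h by s*h/(D - h) on the film. */
double film_parallax(double s, double D, double h)
{
    return s * h / (D - h);
}
```

For a 1000 mm tube-to-film distance the rule gives a 100 mm shift, and a structure 100 mm above the film then shows about 11 mm of parallax between the two exposures - structures higher above the film shift more, which is exactly the depth signal the stereo pair encodes.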

Limitations and drawbacks

Before using radiographs stereoscopically, it is important to understand the limitations of stereoscopy.  Due to our inability to effectively gauge distance from the amount our eyes are accommodated and converged, it is not useful for assessing absolute depth in an image without stereophotogrammetric devices.  Changes in depth and depth relationships, however, are accurately judged.  

The main objection to stereoradiography is that each stereo view requires a double dose of radiation - where a stereo view does not add information, it cannot be justified.  If the technique does provide additional information that contributes positively towards clinical management, then it is as reasonable to use as modalities like CT which also require additional radiation.  When considering radiation, it is useful to remember that the dosage from two AP or PA films of the trunk is lower than that of an AP + lateral series.  The two views also require less irradiation than accepted investigative modalities such as plain film tomography, where multiple slices are made of a region.

Another problem in stereoradiography is the need for the patient to stay still whilst the two views are taken.  Whilst this is easy where filming tables are used, it is harder to get good films in erect patients, patients with neuromuscular disorders, studies which are dependent on respiration phase, and where patients are in considerable pain.

Volumetric data in stereo

Volumetric data can be rendered from two different viewpoints (either using "toed-in" or asymmetric frustum projection) to give stereoscopic views of a subject.  Rendering can be done either using surface generation algorithms or by mapping intensity/density to opacity.  Surface rendering algorithms were developed to speed up the rendering process by reducing the number of geometric primitives and for surface smoothing.   With the increasing speed of recent computers, it is becoming more feasible to render the full volumetric data set on widely available computing platforms.

I have written tutorials on the use of two programs, VolView and AMIDE, for volumetric rendering: see the "opacity based rendering" page.

If you have developed or are developing other uses for stereoscopy in radiology or orthopaedics, I'd be interested to know.

orthopaedic applications

 

We use binocular vision in everyday life.  There are few surgeons who would prefer living - much less operating - with only one eye.  Binocular microscopes are in common usage in microvascular surgery, as are surgical loupes that give good stereoscopic vision.  Is there any reason that we should persist in viewing our imaging with one eye?  Do we continue listening to our music on monophonic gramophones?

The ability to perceive images tridimensionally can add to an overall understanding of a bony problem (the "personality" of the fracture or deformity).  Radiostereogrammetrical analysis (RSA) to measure prosthesis migration is currently the main use of stereoscopy in orthopaedics.  RSA devices are very accurate, often measuring depth in a stereo pair to 1 mm or less.  Harnessing this ability to perceive depth in other clinical situations has a lot of potential.  When using stereoscopy, the limitations of the technique must be kept in mind - it is an adjunct to currently available imaging modalities, not a replacement.

Stereoscopic endoscopy is an area which is being opened up by general, urological, and cardiothoracic surgeons.  There is potential use in orthopaedics, but due to current technical and cost limitations, we are not actively looking at development in this area at present.

Software for rendering volumetric CT and MRI data in stereo is available:  see the "opacity based rendering" page for details and a couple of quick tutorials on the subject.

Stereoscopic visualization is a tool that is mainly useful for adding depth information to images, as well as increasing the perceived resolution of the image by using both eyes.  Applications need only be limited by one's imagination.

opacity based rendering


Most people usually think of colour as a composite of Red, Green, and Blue; or Cyan, Magenta, Yellow.  Computer graphics cards that deal with tridimensional rendering also use a fourth component - Alpha, or opacity.  By mapping the luminance of an image to an alpha value, this opacity component can be used to reconstruct a virtual radiograph from CT data.  The reconstruction can be viewed from any angle, with or without stereo aids, and the viewer can alter the viewpoint or opacity of the image in real time.
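As an illustration, a minimal linear mapping from intensity to alpha might look like the following - one plausible transfer function of my own devising, not the mapping VolView or AMIDE actually uses:

```c
#include <math.h>

/* Map an intensity value to an opacity (alpha) in [0,1] with a linear
   ramp over the window [lo, hi]: fully transparent below lo, fully
   opaque above hi.  A minimal sketch of an opacity transfer function. */
double intensity_to_alpha(double v, double lo, double hi)
{
    if (v <= lo) return 0.0;
    if (v >= hi) return 1.0;
    return (v - lo) / (hi - lo);
}
```

Air (very low intensity) maps to zero opacity and so disappears from the rendering, dense bone saturates to fully opaque, and soft tissue falls on the ramp in between - adjusting lo and hi is the real-time "opacity" control described above.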

Display of reconstructed CT or MRI data has traditionally been done as a surface structure with lighting algorithms applied.  This has mainly been because using a surface model decreases the amount of data that needs to be processed.  It does have drawbacks: one cannot "see into" the volume, the surface generation algorithms smooth over regions that one may want to see, and the lighting algorithms come at increased computational cost.  With the advances in computer technology, viewing the full volumetric data set can be done on a modest computer platform with a reasonable graphics card at home or on the move - this reduces the surgeon's dependency on the radiology department and their expensive Silicon Graphics workstations.

I have not uploaded any full volumes here, as it is all real patient data and will not be made freely available.  If you are interested and want some examples, email me with your professional details.

Opacity rendering tutorials

The two programs that I use are VolView from Kitware and AMIDE, an open-source development by Andy Loening.

Kitware has very kindly built in DICOM file support to VolView which makes it easy to open a 3D DICOM file or convert a stack of DICOM images into a single 3D file.

AMIDE is a useful program that allows volume rendering in parallel view stereo with alteration of the stereo parameters, as well as tools to remove extras such as plaster casts or CT tables.

To use either of the programs to look at CT data like a "virtual radiograph" follow the links below:

stereo geometry in OpenGL



OpenGL is a powerful cross-platform graphics API which has many benefits - specifically a wide range of hardware and software support, and support for quad-buffered stereo rendering.  Quad-buffering is the ability to render into left and right front and back buffers independently.  The front left and front right buffers displaying the stereo images can be swapped in sync with shutterglasses while the back left and back right buffers are being updated - giving a smooth stereoscopic display.  OpenGL is also relatively easy to learn with a bit of application - the code below was written within three months of the time I started programming, due in no small part to the many resources available on the web.

When rendering in OpenGL, understanding the geometry behind what you want to achieve is essential.  Toed-in stereo is quick and easy, but has the side effect that keystone distortion creeps into both left and right views due to the difference between the rendering plane and the viewing plane.  This is not too much of a problem in central portions of a scene, but becomes significant at the screen edges.  Asymmetric frustum parallel projection (equivalent to lens-shift in photography) corrects for this keystone distortion and puts the rendering plane and viewing plane in the same orientation.  It is essential when using the asymmetric frustum technique that the rendering geometry closely matches the geometry of the viewing system.  Failure to match rendering and viewing geometry results in a distorted image delivered to both eyes and can be more disturbing than the distortion from toed-in stereo.

Paul Bourke has an excellent site with examples of stereoscopic rendering in OpenGL.  If you are interested in creating stereo views in OpenGL, it is worth spending time working out the geometry of toed-in (which is quite easy but introduces distortion into the viewing system) and asymmetric frustum parallel axis projection for yourself.  Below is my method - familiarity with OpenGL, GLUT and C is assumed and you need to have a graphics card which is capable of quad-buffering:

Toed-in stereo

Toed-in geometry

The idea is to use gluLookAt to set the camera position and point it at the middle of the screen from the two eye positions:

//toed-in stereo

float depthZ = -10.0;                                      //depth of the object drawing

double fovy = 45;                                          //field of view in y-axis
double aspect = double(screenwidth)/double(screenheight);  //screen aspect ratio
double nearZ = 3.0;                                        //near clipping plane
double farZ = 30.0;                                        //far clipping plane
double screenZ = 10.0;                                     //screen projection plane
double IOD = 0.5;                                          //intraocular distance

void init(void)
{
  glViewport (0, 0, screenwidth, screenheight);            //sets drawing viewport
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  gluPerspective(fovy, aspect, nearZ, farZ);               //sets frustum using gluPerspective
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
}
GLvoid display(GLvoid)
{
  glDrawBuffer(GL_BACK);                                   //draw into both back buffers
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);      //clear color and depth buffers

  glDrawBuffer(GL_BACK_LEFT);                              //draw into back left buffer
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();                                        //reset modelview matrix
  gluLookAt(-IOD/2,                                        //set camera position  x=-IOD/2
            0.0,                                           //                     y=0.0
            0.0,                                           //                     z=0.0
            0.0,                                           //set camera "look at" x=0.0
            0.0,                                           //                     y=0.0
            screenZ,                                       //                     z=screenplane
            0.0,                                           //set camera up vector x=0.0
            1.0,                                           //                     y=1.0
            0.0);                                          //                     z=0.0
 

  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();


  glDrawBuffer(GL_BACK_RIGHT);                             //draw into back right buffer
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();                                        //reset modelview matrix
  gluLookAt(IOD/2, 0.0, 0.0, 0.0, 0.0, screenZ,            //as for left buffer with camera position at:
            0.0, 1.0, 0.0);                                //                     (IOD/2, 0.0, 0.0)

  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();

 
  glutSwapBuffers();
}


Asymmetric frustum parallel axis projection stereo

This is a bit more complex, as one needs to set up an asymmetric frustum first before moving the camera viewpoint.  In OpenGL, the asymmetric frustum is set up with the camera at the (0.0, 0.0, 0.0) position and then needs to be translated by IOD/2 to make sure that there is no parallax difference at the screen plane depth.  Geometry for the right viewing frustum is depicted below:

Asymmetric frustum geometry

To set up an asymmetric frustum, the main thing is deciding how much to shift the frustum by.  This is quite easy as long as we assume that we only want to move the camera by +/- IOD/2 along the X-axis.  From the geometry it is evident that the ratio of frustum shift to the near clipping plane is equal to the ratio of IOD/2 to the distance from the screenplane.
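That ratio can be checked numerically with the values used in the listing below (IOD 0.5, near clipping plane 3.0, screen plane 10.0), which give a frustum shift of 0.075:

```c
#include <math.h>

/* frustumshift : nearZ = (IOD/2) : screenZ, so */
double frustum_shift(double IOD, double nearZ, double screenZ)
{
    return (IOD / 2.0) * nearZ / screenZ;
}
```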

I decided to use a function call to set up the frustum on initialization and anytime the viewport is changed:

#define DTR 0.0174532925

struct camera
{
    GLdouble leftfrustum;
    GLdouble rightfrustum;
    GLdouble bottomfrustum;
    GLdouble topfrustum;
    GLfloat modeltranslation;
} leftCam, rightCam;

float depthZ = -10.0;                                      //depth of the object drawing

double fovy = 45;                                          //field of view in y-axis
double aspect = double(screenwidth)/double(screenheight);  //screen aspect ratio
double nearZ = 3.0;                                        //near clipping plane
double farZ = 30.0;                                        //far clipping plane
double screenZ = 10.0;                                     //screen projection plane
double IOD = 0.5;                                          //intraocular distance

void setFrustum(void)
{
    double top = nearZ*tan(DTR*fovy/2);                    //sets top of frustum based on fovy and near clipping plane
    double right = aspect*top;                             //sets right of frustum based on aspect ratio
    double frustumshift = (IOD/2)*nearZ/screenZ;

    leftCam.topfrustum = top;
    leftCam.bottomfrustum = -top;
    leftCam.leftfrustum = -right + frustumshift;
    leftCam.rightfrustum = right + frustumshift;
    leftCam.modeltranslation = IOD/2;

    rightCam.topfrustum = top;
    rightCam.bottomfrustum = -top;
    rightCam.leftfrustum = -right - frustumshift;
    rightCam.rightfrustum = right - frustumshift;
    rightCam.modeltranslation = -IOD/2;
}
void init(void)
{
  glViewport (0, 0, screenwidth, screenheight);            //sets drawing viewport
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();

  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
}
GLvoid reshape(int w, int h)
{
    if (h==0)
    {
        h=1;                                               //prevent divide by 0
    }
    aspect=double(w)/double(h);
    glViewport(0, 0, w, h);
    setFrustum();
}
GLvoid display(GLvoid)
{
  glDrawBuffer(GL_BACK);                                   //draw into both back buffers
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);      //clear color and depth buffers

  glDrawBuffer(GL_BACK_LEFT);                              //draw into back left buffer
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();                                        //reset projection matrix
  glFrustum(leftCam.leftfrustum, leftCam.rightfrustum,     //set left view frustum
            leftCam.bottomfrustum, leftCam.topfrustum,
            nearZ, farZ);
  glTranslatef(leftCam.modeltranslation, 0.0, 0.0);      //translate to cancel parallax
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();


  glDrawBuffer(GL_BACK_RIGHT);                             //draw into back right buffer
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();                                        //reset projection matrix
  glFrustum(rightCam.leftfrustum, rightCam.rightfrustum, //set right view frustum
            rightCam.bottomfrustum, rightCam.topfrustum,
            nearZ, farZ);
  glTranslatef(rightCam.modeltranslation, 0.0, 0.0);     //translate to cancel parallax
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();


  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();

 
  glutSwapBuffers();
}

ghosting


 
Ghosting, or crosstalk, is a problem that is less evident in stereo movies and gaming due to motion of the scene, lower contrast levels, and the presence of color in the images.  Medical images viewed in stereo are not forgiving if the viewing system has even a small level of ghosting.  This is because the images are typically grayscale, static, have areas of very high contrast and also areas in which fine gradations of gray are used to differentiate structures.  The obvious and best solution for the problem is to use an optical viewing system which has no chance of ghosting.

In using any viewing system where light from the images is physically superimposed, a small percentage of the image destined for the contralateral eye leaks through the coding device, whether shutterglasses, polarized glasses, anaglyph glasses or an autostereoscopic screen.  With shutterglasses the ghosting arises from two components.  Firstly, the image on the monitor phosphor persists for a short time after it has been switched off, and this lag allows some of the image to remain at perceptible levels when the contralateral shutter opens.  Secondly, even with the shutter closed, a small proportion of light from the wrong image still leaks through.

To eliminate perceptible ghosting, the amount of light leakage to the contralateral eye should be below 2% of what is being displayed to that eye (the Weber fraction).
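That criterion is trivial to state in code - a sketch, with the 2% threshold taken from the Weber fraction above and the function name my own:

```c
/* Ghost perceptibility by the ~2% Weber criterion: leaked luminance as
   a fraction of the luminance intended for that eye. */
int ghost_perceptible(double leak, double intended)
{
    return (leak / intended) > 0.02;
}
```

It also makes clear why high contrast images are the worst case: a bright metallic implant against a dark background maximizes the leak-to-intended ratio in the dark regions of the other eye's view.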

To make shutterglasses or autostereoscopic screens truly useful for radiology, the ghosting will have to be eliminated or minimized in future generations of stereoscopic equipment.

posted on 2007-05-03 20:11 by zmj
