This commit is contained in:
Tang1705
2021-01-04 21:34:55 +08:00
parent ffc14d20af
commit d55c8aa732
2 changed files with 35 additions and 32 deletions


@@ -46,7 +46,7 @@
<a href="./Readme-en.md">English Version</a>
</p>-->
</div>
## [👁️‍🗨️](https://emojipedia.org/eye-in-speech-bubble/) Overview
The wheel-rail attitude of a high-speed railway reflects the complex dynamic interaction and constraint relationship between wheels and rails. Mastering the true contact attitude between them is an important foundation for ensuring the safety of high-speed railways, and how to obtain it accurately has long been an active topic in domestic railway research. However, estimating the wheel-rail contact attitude from a 2D image is imprecise and unreliable. Extracting feature points on the wheel and rail surfaces and reconstructing a 3D model yields the contact attitude more realistically and accurately.
@@ -71,48 +71,47 @@ The coded structured light method used in the project uses a certain pattern of
The 3D reconstruction technology of coded structured light method is mainly composed of five key technologies: system calibration, structured light coding, image acquisition, structured light decoding and three-dimensional coordinate calculation.
<div class="imgs" align="center" ><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/19.png" alt="19" width="80%" height="80%"/></div>
- System Calibration: The system consists of a camera, a projector and a computer. The goal of calibration is to compute the intrinsic parameter matrices and lens distortion coefficients of the camera and the projector, as well as the extrinsic parameter matrix describing their relative position.
- Structured Light Coding: The "identity" of each point of the pattern can be identified through coding.
- Image Capture: The projector projects the coded structured light pattern on the surface of the target, and the pattern will be distorted with the modulation of the surface shape of the object. What is captured by the camera is the structured light image modulated by the object. The modulated image reflects the three-dimensional information of the surface shape of the object.
- Structured Light Decoding: Decode the captured structured light image; the decoding method depends on the encoding method. The purpose is to establish the correspondence between feature points on the camera plane and the projection plane.
- 3D Coordinate Calculation: Using the correspondence between feature points and the calibration results, the 3D coordinates of the feature points are obtained by the principle of triangulation.
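The triangulation step can be sketched with a linear (DLT) solver: given the projection matrices of the camera and projector from calibration and one matched feature point, the 3D point is the null vector of a small homogeneous system. The matrices below are synthetic stand-ins for illustration, not the project's calibration results:

```python
import numpy as np

def triangulate(P_cam, P_proj, x_cam, x_proj):
    """Linear (DLT) triangulation: recover a 3D point from its
    projections in the camera and projector image planes."""
    A = np.array([
        x_cam[0] * P_cam[2] - P_cam[0],
        x_cam[1] * P_cam[2] - P_cam[1],
        x_proj[0] * P_proj[2] - P_proj[0],
        x_proj[1] * P_proj[2] - P_proj[1],
    ])
    # Solve A X = 0 via SVD; the solution is the last right-singular vector
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic setup: camera at the origin, projector translated along x
P_cam = np.hstack([np.eye(3), np.zeros((3, 1))])
P_proj = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 5.0])
x_cam = P_cam @ np.append(X_true, 1.0); x_cam = x_cam[:2] / x_cam[2]
x_proj = P_proj @ np.append(X_true, 1.0); x_proj = x_proj[:2] / x_proj[2]
print(triangulate(P_cam, P_proj, x_cam, x_proj))  # ≈ [0.3, -0.2, 5.0]
```

With noise-free correspondences the recovered point matches exactly; with real data the SVD gives the least-squares solution.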
Structured light coding methods fall into two main categories: Time-multiplexing and Space Codification. Although Time-multiplexing offers good reconstruction accuracy, it requires projecting multiple patterns onto the object's surface, so it is a poor choice for moving objects. Space Codification is comparatively less accurate, but because it needs only a single projected pattern, it is often used to reconstruct dynamic objects.
In summary, since the wheel and rail surfaces are smooth and natural feature points are hard to extract, feature points can be added artificially by projecting a coded pattern onto the object's surface. Space Codification needs only a single projection, which suits the reconstruction of dynamic objects. This project therefore focuses on Space Codification methods to obtain a finer, high-density three-dimensional point cloud (a point cloud is the set of feature points on the object's surface; each point carries information such as its 3D coordinates and color).
In the early stage of the project, I studied the research content in depth, surveyed the relevant literature, and reviewed the state of the art of the techniques involved, to the point of understanding the classic methods and being able to implement them. On that basis, I studied the theory and papers on De Bruijn sequence coding and pseudo-random matrix coding within spatial coding. Some of these papers are shown below.
## [📷](https://emojipedia.org/camera/) Algorithm
<div class="imgs" align="center" ><img src="http://static.zybuluo.com/TangWill/epo24nprhgogp1g0s38hiwy7/02.png" alt="03" width="30%" height="30%" /> <img src="http://static.zybuluo.com/TangWill/6q8x3x920nxdyluimu43gut1/03.png" alt="04" width="30%" height="30%" /> <img src="http://static.zybuluo.com/TangWill/z6giu9sqibq08fc1crweplxa/04.png" alt="05" width="30%" height="30%" /></div>
<table><tr><td width="500px"><div class="img" align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/20.png" alt="12" height="100%" width="100%"></div></td><td>The main innovations of the project are as follows:
<ul><li><b>Stripe center extraction with sub-pixel precision</b>: Designed and implemented the coded structured light pattern and a stripe center extraction algorithm suited to it; stripe centers are located with sub-pixel accuracy</li>
<li><b>Increased point cloud density through wavelet transform</b>: Proposed an improvement on windowed-Fourier-transform fringe phase analysis: a wavelet transform based on the generalized Morse wavelet extracts the phase of non-center points, increasing point cloud density</li>
<li><b>A full-process 3D reconstruction platform</b>: Packaged the above algorithms and point cloud visualization into structured light 3D reconstruction software, which has completed the 3D reconstruction of rails and several geometric solids and is expected to be used for wheel-rail attitude reconstruction and visualization</li></ul></td></tr></table>
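Sub-pixel stripe-center localization is commonly realized by fitting a parabola through each row's local intensity maximum and its two neighbors. The sketch below assumes a Gaussian-like stripe profile; the project's exact refinement may differ:

```python
import numpy as np

def subpixel_peak(profile):
    """Locate the maximum of a 1-D intensity profile with sub-pixel
    precision via a parabola through the peak and its neighbours."""
    i = int(np.argmax(profile))
    if i == 0 or i == len(profile) - 1:
        return float(i)  # peak on the border: no refinement possible
    y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)  # flat top: keep the integer position
    return i + 0.5 * (y0 - y2) / denom

# A Gaussian-shaped stripe, 14 pixels wide, centred at x = 6.3
x = np.arange(14)
stripe = np.exp(-0.5 * ((x - 6.3) / 2.0) ** 2)
print(subpixel_peak(stripe))  # close to 6.3
```

Applied per image row, this turns integer-pixel local maxima into sub-pixel stripe center points.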
In the middle stage of the project, I reviewed, summarized, and organized the literature and classic algorithm implementations studied earlier, selected the algorithms and techniques applicable to this project's research content, proposed experimental ideas, and formulated an experimental plan.
<table> <tr> <td>Pattern Creation</td><td width="600px">The pattern is designed in the HSV color space and consists of a colored sinusoidal fringe pattern:
<ul>
<li> The H channel is coded with the B(3,4) De Bruijn sequence. Stripes are the basic elements of the coding pattern; the three values correspond to the colors red, blue, and green.</li>
<li> The S channel is set to 1 for all pixels.</li>
<li> The V channel is given by a sinusoidal signal.</li>
</ul>
<p>There are 64 stripes in the coding pattern, each 14 pixels wide, with the stripe center points serving as the feature points of the projection pattern</p>
♣ A De Bruijn sequence B(n, m) is built from n distinct symbols, and any consecutive subsequence of length m appears only once
</td><td><div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/21.png" alt="21" width="100%" height="100%"/></div></td></tr><tr><td> DeBruijn Analysis</td><td>在对灰度图像进行预处理后,为获得条纹中心点的位置,采用局部最大值算法从类似“高斯”形状的条纹灰度图像中提取图像每一行的局部最大值(以亚像素精度检测),局部最大值点即为条纹的中心点。在 𝐿𝑎𝑏 颜色空间下,对条纹中心点的颜色进行分类,在 4×1 的窗口中即可获得条纹中心点在投影图案的对应位置。After preprocessing the gray image, in order to extract the center point of the stripes, a local maximum algorithm is applied to searching local maxima (detected with sub-pixel precision) of each row of the image from the strips which is present a gaussian-like shape, The local maximum point is the center point of the fringe. In the 𝐿𝑎𝑏 color space, classify the colors of the center point of the stripe. In a 4×1 window, you can get the corresponding position of the center point of the stripe in the projection pattern.</td><td><div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/22.png" alt="22" width="100%" height="100%"/></div></td></tr><tr><td>Wavelet Transform Analysis</td><td>The change of the V channel satisfies the given cosine function and contains the phase information of the non-center point of the fringe. But after the pattern is modulated by the object, the originally stable signal changes. As a non-stationary signal processing method, the wavelet transform method has been introduced into many signal processing fields, including phase extraction from fringe patterns. Comprehensive comparison of one-dimensional and two-dimensional window Fourier transform, wavelet transform and other methods, for the consideration of reconstruction accuracy and speed, the one-dimensional wavelet transform method is selected. 
The generalized Morse wavelet has flexible time-frequency local characteristics and strict analysis, and the effect of measuring the 3D contour of the object is better than the popular complex Morlet wavelet as the mother wavelet.</td><td><div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/23.png" alt="23" width="100%" height="100%"/></div></td></tr></table>
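A minimal sketch of 1-D wavelet phase analysis of a fringe signal, implemented with NumPy. For brevity it uses a complex Morlet wavelet rather than the generalized Morse wavelet the project adopts, and the signal parameters (period, modulation, `w0`, scale range) are illustrative:

```python
import numpy as np

def cwt_phase(v, scales, w0=6.0):
    """Wrapped phase of a 1-D fringe signal via a complex-Morlet CWT:
    at each sample, take the phase at the scale of maximum energy."""
    n = len(v)
    coeffs = np.empty((len(scales), n), dtype=complex)
    for k, s in enumerate(scales):
        half = int(4 * s)                  # truncate the Gaussian envelope
        t = np.arange(-half, half + 1)
        psi = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / s
        # for this symmetric envelope, convolving with psi equals
        # correlating with its complex conjugate
        coeffs[k] = np.convolve(v, psi, mode="same")
    ridge = np.abs(coeffs).argmax(axis=0)  # best-matching scale per sample
    return np.angle(coeffs[ridge, np.arange(n)])

# Fringe: cosine with a 14-pixel period whose phase is gently modulated,
# as it would be by the shape of an object
x = np.arange(512)
phi = 2 * np.pi * x / 14 + 0.5 * np.sin(2 * np.pi * x / 400)
v = 0.5 + 0.5 * np.cos(phi)
phase = cwt_phase(v, scales=np.arange(8.0, 25.0))
# away from the borders, `phase` matches phi modulo 2*pi
```

The recovered wrapped phase assigns depth-bearing phase values to non-center pixels, which is how the point cloud is densified; a real pipeline would follow this with phase unwrapping.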
The pseudo-random sequence has a good window property: as a small window slides over the code pattern, the code combination inside each window is unique, so the feature points on the coding pattern can be uniquely identified. The polynomial of the paper, <a href="https://www.codecogs.com/eqnedit.php?latex=h(x)=2x^6&plus;2x^5&plus;x^4&plus;3x^3&plus;2x^2&plus;2x&plus;1" target="_blank"><img src="https://latex.codecogs.com/gif.latex?h(x)=2x^6&plus;2x^5&plus;x^4&plus;3x^3&plus;2x^2&plus;2x&plus;1" title="h(x)=2x^6+2x^5+x^4+3x^3+2x^2+2x+1" /></a>, was reproduced. The diamond is the basic element of the structured light coding pattern; the four colors red, blue, green and black mark the different diamond values, and the window size is <a href="https://www.codecogs.com/eqnedit.php?latex=2&space;\times&space;3" target="_blank"><img src="https://latex.codecogs.com/gif.latex?2&space;\times&space;3" title="2 \times 3" /></a>. Using the diamond corner points as feature points effectively improves the accuracy of feature point extraction. With the structured light decoding method proposed in the paper, the feature points on the object surface can be extracted effectively, but because the corner points are few, a dense point cloud cannot be obtained. In the future, the density of feature points could be increased by raising the camera resolution and shrinking the diamond size. Some reference papers and experimental results are shown below.
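The window property described above is easy to verify programmatically. The sketch below checks 2×3 window uniqueness on small hand-made 4-ary matrices, not on the actual pattern generated from h(x):

```python
import numpy as np

def windows_unique(mat, h, w):
    """Check the window property of a coded pattern: every h-by-w
    window of symbols must occur at most once in the matrix."""
    seen = set()
    rows, cols = mat.shape
    for r in range(rows - h + 1):
        for c in range(cols - w + 1):
            key = tuple(mat[r:r + h, c:c + w].ravel())
            if key in seen:
                return False  # this window already appeared elsewhere
            seen.add(key)
    return True

# Illustrative matrices only (symbols 0..3 standing for the four colors)
demo = np.array([[0, 1, 2, 3],
                 [1, 2, 3, 0]])
print(windows_unique(demo, 2, 3))                         # True: both windows differ
print(windows_unique(np.zeros((4, 6), dtype=int), 2, 3))  # False: windows repeat
```

During decoding, the same lookup runs in reverse: the 2×3 symbol combination read around a corner point indexes its unique position in the projected pattern.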
<div class="imgs" align="center" ><img src="http://static.zybuluo.com/TangWill/z6giu9sqibq08fc1crweplxa/04.png" alt="06" width="22%" height="22%" /> <img src="http://static.zybuluo.com/TangWill/9jljaktk7e3qso61by60pohf/05.png" alt="07" width="22%" height="22%" /> <img src="http://static.zybuluo.com/TangWill/1wqxgxn4iwdicq1v9rs04e1x/06.png" alt="08" width="22%" height="22%" /><img src="http://static.zybuluo.com/TangWill/s3gah640whmsbfxoxzg2pqib/07.png" alt="09" width="22%" height="22%" /></div>
<p><a href="https://youtu.be/UfuwyE6MP0Q" rel="nofollow"><div class="imgs" align="center" ><img src="https://camo.githubusercontent.com/d6988f63060973271694ac180a5b09ff4c6410cc/687474703a2f2f7374617469632e7a7962756c756f2e636f6d2f54616e6757696c6c2f6f707035796e707263666f713570777567786363637674312f6d7034302e6a7067" alt="ScreenShot" data-canonical-src="http://static.zybuluo.com/TangWill/opp5ynprcfoq5pwugxcccvt1/mp40.jpg" width="75%" height="75%"/></div></a></p>
## [📽️](https://emojipedia.org/film-projector/) Performance
A De Bruijn sequence is built from n distinct symbols, and any consecutive subsequence of length m appears only once. The reproduced paper codes the pattern with the B(3,4) sequence: stripes are the basic elements of the structured light pattern, the three colors red, blue and green mark the different stripe values, the window size is <a href="https://www.codecogs.com/eqnedit.php?latex=4&space;\times&space;1" target="_blank"><img src="https://latex.codecogs.com/gif.latex?4&space;\times&space;1" title="4 \times 1" /></a>, and the stripe center points are the feature points. In addition, in the HSV color space model, the V channel of the stripes is coded with a cosine function. During structured light decoding, besides extracting the fringe center points as feature points, a windowed Fourier transform analysis is applied to the V channel of the captured image, and the recovered phase increases the density of feature points. Following the paper's basic idea, point cloud density can be increased while improving the precision of point cloud extraction, and the experimental results are good. Based on these papers and experiments, and on the project's research content and progress, the reproduced algorithm was improved and an algorithm pipeline suited to the project scenario was proposed. Some papers are shown below.
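A B(3,4) De Bruijn sequence can be generated with the classic Lyndon-word (FKM) algorithm. The sketch below builds the full 81-symbol cycle; the coding pattern described earlier uses 64 consecutive stripes of such a sequence:

```python
def de_bruijn(k, n):
    """Cyclic De Bruijn sequence B(k, n): k symbols, every length-n
    word appears exactly once (classic FKM / Lyndon-word algorithm)."""
    a = [0] * k * n
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

s = de_bruijn(3, 4)
print(len(s))  # 81 == 3**4
# cyclically, every length-4 window is distinct
ext = s + s[:3]
print(len({tuple(ext[i:i + 4]) for i in range(81)}))  # 81 distinct windows
```

Mapping the symbols 0, 1, 2 to red, blue and green then yields stripe colors in which any 4×1 run of stripes identifies its position uniquely.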
<table> <tr align="center"> <td><div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/24.gif" alt="24" width="100%" height="100%"/></div></td><td>
<div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/25.gif" alt="25" width="100%" height="100%"/></div> </td><td> <div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/26.gif" alt="26" width="100%" height="100%"/></div> </td></tr><tr align="center"><td>Sphere with a radius of 95mm<br>Point cloud data 17W+<br>Radius error 0.678mm<br>Calculation time 10-15s</td><td>Multi-object 3D reconstruction</td><td>3D reconstruction of rail surface</td></tr></table>
<div class="imgs" align="center" ><img src="http://static.zybuluo.com/TangWill/i9bvx3xay0c8v4040hjzulcv/08.png" alt="10" width="22%" height="22%" /> <img src="http://static.zybuluo.com/TangWill/wspx83923ujobnjysj4y2l3h/09.png" alt="11" width="22%" height="22%" /> <img src="http://static.zybuluo.com/TangWill/pflvpf9yzwkfw8csn0at2trl/10.png" alt="12" width="22%" height="22%" /><img src="http://static.zybuluo.com/TangWill/v0cdilgqhy1k4oqaw2dhdpgh/11.png" alt="13" width="22%" height="22%" /></div>
The sphere surface was reconstructed with the improved algorithm, yielding about 170,000 point cloud points, which were rendered in MeshLab. The experimental results are shown below.
<div class="imgs" align="center" ><img src="http://static.zybuluo.com/TangWill/etw2k42uqvbb63zyvbry0ki6/13.png" alt="14" width="75%" height="75%" /></div>
The B(4,3) sequence is used for coding: stripes are the basic elements of the structured light pattern, the four colors red, blue, green and white mark the different stripe values, the window size is <a href="https://www.codecogs.com/eqnedit.php?latex=3\times&space;1" target="_blank"><img src="https://latex.codecogs.com/gif.latex?3\times&space;1" title="3\times 1" /></a>, and the stripe center points are the feature points. The sphere surface was likewise reconstructed, yielding about 200,000 point cloud points, rendered in MeshLab. The experimental results are shown below.
## [💻](https://emojipedia.org/laptop/) Demo
<div class="imgs" align="center" ><img src="http://static.zybuluo.com/TangWill/8z4qxmk5467d6r7rpr1brpkg/14.png" alt="15" width="75%" height="75%" /></div>
The project was established in April 2019 and has followed its research plan since. Existing 3D reconstruction methods based on spatially coded structured light have been studied, implemented, and optimized; a scheme tailored to the project's application scenario was proposed on top of the existing algorithms; and the software packaging has been completed. The software is expected to be used for the 3D reconstruction of both static and moving objects.
The software is shown below.
<a style="color:black" href="./Exe/Reconstructionn.exe">The software</a> integrates the entire 3D reconstruction process and implements three functions: system calibration, 3D reconstruction, and point cloud rendering. It is developed in C++, with an interface built on the Qt framework, and relies on OpenCV and PCL (Point Cloud Library) for image and point cloud data processing. Design patterns such as the singleton and chain-of-responsibility patterns are used. <img src="https://img.shields.io/badge/Demo- -%23FF0000?colorA=%23FF0000&colorB=%23FF0000&style=for-the-badge&logo=YouTube"/>
- UI of System Calibration
<div class="imgs" align="center" ><img src="http://static.zybuluo.com/TangWill/ml0iegb11jyr7t5iw1kp30ei/%E8%AE%A1%E7%AE%97%E6%9C%BA%E4%B8%8E%E4%BF%A1%E6%81%AF%E6%8A%80%E6%9C%AF%E5%AD%A6%E9%99%A2-%E5%9F%BA%E4%BA%8E%E7%BC%96%E7%A0%81%E7%BB%93%E6%9E%84%E5%85%89%E7%9A%84%E9%AB%98%E9%93%81%E8%BD%AE%E8%BD%A8%E5%A7%BF%E6%80%81%E4%B8%89%E7%BB%B4%E9%87%8D%E5%BB%BA-%E7%BB%93%E6%9E%84%E5%85%89%E4%B8%89%E7%BB%B4%E9%87%8D%E5%BB%BA%E8%BD%AF%E4%BB%B6%E2%80%94%E2%80%94%E7%B3%BB%E7%BB%9F%E6%A0%87%E5%AE%9A%E7%95%8C%E9%9D%A2.jpg" alt="16" width="75%" height="75%" /></div>
@@ -123,8 +122,14 @@ The software is shown as follows
- UI of point cloud rendering
<div class="imgs" align="center" ><img src="http://static.zybuluo.com/TangWill/ufqbnx21rnzkvfhmsyi2rosr/%E8%AE%A1%E7%AE%97%E6%9C%BA%E4%B8%8E%E4%BF%A1%E6%81%AF%E6%8A%80%E6%9C%AF%E5%AD%A6%E9%99%A2-%E5%9F%BA%E4%BA%8E%E7%BC%96%E7%A0%81%E7%BB%93%E6%9E%84%E5%85%89%E7%9A%84%E9%AB%98%E9%93%81%E8%BD%AE%E8%BD%A8%E5%A7%BF%E6%80%81%E4%B8%89%E7%BB%B4%E9%87%8D%E5%BB%BA-%E7%BB%93%E6%9E%84%E5%85%89%E4%B8%89%E7%BB%B4%E9%87%8D%E5%BB%BA%E8%BD%AF%E4%BB%B6%E2%80%94%E2%80%94%E7%82%B9%E4%BA%91%E6%B8%B2%E6%9F%93%E7%95%8C%E9%9D%A2.jpg" alt="18" width="75%" height="75%" /></div>
<p><a href="https://youtu.be/DM47pxDPks8" rel="nofollow"><div class="imgs" align="center" ><img src="https://camo.githubusercontent.com/21d5ee3679cc70eef161d9345e389816d1ffffbe/687474703a2f2f7374617469632e7a7962756c756f2e636f6d2f54616e6757696c6c2f6a6c3938626573756465656f396c75736879746d6f63616b2f6d7034312e6a7067" alt="ScreenShot" data-canonical-src="http://static.zybuluo.com/TangWill/jl98besudeeo9lushytmocak/mp41.jpg" width="75%" height="75%"/></div></a></p>
## [🔧](https://emojipedia.org/wrench/) Configuration
<table><tr align="center" style="background-color:#D9E2F3"><td width="500px">Hardware</td><td width="500px">Version</td></tr><tr align="center"><td>Point Grey Camera</td><td>——</td></tr><tr align="center"><td>LightCrafter4500</td><td>——</td></tr></table>
<table><tr align="center" style="background-color:#D9E2F3"><td width="500px">Software</td><td width="500px">Version</td></tr><tr align="center"><td>Windows</td><td>Windows 10</td></tr><tr align="center"><td>Visual Studio</td><td>2017</td></tr><tr align="center"><td>QT</td><td>5.12.3</td></tr><tr align="center"><td>OpenCV</td><td>4.2.03</td></tr><tr align="center"><td>FlyCapture2</td><td>2.12.3.2</td></tr><tr align="center"><td>PCL</td><td>1.8.1</td></tr><tr align="center"><td>VTK</td><td>8.0</td></tr></table>
<div><text style="color:red">Note</text>: Need to configure the environment variables of the computer and the properties of the project in Visual Studio (VC++ directory-include directory, VC++-library directory and linker-input-additional dependencies)</div>
## 📜 License
The code is made available under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).


@@ -76,7 +76,9 @@
- Structured Light Coding: coding makes the "identity" of every point in the image identifiable;
- Image Capture: the projector casts the coded structured light pattern onto the object, and the pattern is distorted by the modulation of the object's surface shape; the camera captures the object-modulated structured light image, whose deformation reflects the 3D information of the surface shape;
- Structured Light Decoding: the captured structured light image is decoded; the decoding method depends on the encoding method, and the goal is to establish the correspondence between feature points on the camera plane and the projection plane;
- 三维坐标计算:利用解码算法得出的特征点对应关系和系统标定结果,基于三角测量原理求出特征点的三维信息。
结构光的编码方式主要有时间编码和空间编码两种。时间编码虽具有较好的重建精度,但由于需要向物体表面投射多张图片,所以对于运动物体来说时间编码的结构光重建不是一个好的选择。空间编码相较于时间编码重建精度较低,但由于只需投射一张图片,所以常常用于动态物体的物体重建。
综上所述,针对项目中轮轨表面光滑,特征点不易提取的难点,可以通过向物体表面投射编码图案,人为地增加物体表面的特征点。由于空间编码只需单次投影,适合对高速运动的高铁轮轨进行重建。因此,本项目主要研究通过空间编码结构光方法获得相对更高精度和高密度的三维点云(点云,即物体表面特征点的集合,这些点包含了物体表面的三维坐标及颜色等信息)。
@@ -103,12 +105,8 @@
<table> <tr align="center"> <td><div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/24.gif" alt="24" width="100%" height="100%"/></div></td><td>
<div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/25.gif" alt="25" width="100%" height="100%"/></div> </td><td> <div align="center"><img src="https://5618.oss-cn-beijing.aliyuncs.com/wordpress/image/00/26.gif" alt="26" width="100%" height="100%"/></div> </td></tr><tr align="center"><td>半径95mm的球体<br>表面点云17W+<br>半径误差0.678mm<br>运算时间10-15s</td><td>多物体三维重建</td><td>铁轨表面三维重建</td></tr></table>
## [💻](https://emojipedia.org/laptop/) Software Demo
<a style="color:black" href="./Exe/Reconstructionn.exe">软件</a>集三维重建整个流程为一体,主要实现系统(相机与投影仪)标定、三维重建和点云渲染三个功能。软件以 C++ 作为开发语言并基于 QT 框架进行界面开发,依赖于 OpenCV 和 PCL (Point Cloud Library) 进行图像和点云数据处理。在开发上采用了单例模式、责任链模式等设计模式。 <img src="https://img.shields.io/badge/Demo- -%23FF0000?colorA=%23FF0000&colorB=%23FF0000&style=for-the-badge&logo=YouTube"/>