
Technologies of a 3D Sensor

The three-dimensional registration of objects plays a central role in automation, since the next processing stage needs to know an object's position, size and shape. The path to a 3D point cloud involves several steps and can be solved with different measurement techniques.
 

Triangulation and Structured Light

The triangulation technique is a method of obtaining depth information. The illumination source and the camera are mounted a defined distance apart and aligned to a common point, forming a triangle with a so-called triangulation angle. This angle can be used to calculate the depth information: the greater the angle, the better the depth resolution that can be achieved. The triangulation angle also causes illuminated objects to cast shadows (shading), or the object obscures the background so that it is no longer visible to the camera (occlusion). Depth information can only be output for areas that are neither shaded nor occluded. A 3D sensor from wenglor works with structured light and triangulation: it consists of a light source and a camera aligned to a common point, and a 3D point cloud can be created by projecting different patterns onto the object.
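
To make the geometry concrete, here is a minimal Python sketch of how depth follows from the triangle formed by the light source, the camera and the object point. It assumes a simplified 2-D setup in which both angles are measured against the baseline; a real sensor uses a calibrated camera model instead.

```python
import math

def triangulation_depth(baseline_m: float, alpha_deg: float, beta_deg: float) -> float:
    """Height of the illuminated point above the baseline.

    baseline_m: distance between light source and camera (m)
    alpha_deg:  projection angle of the light source against the baseline
    beta_deg:   viewing angle of the camera against the baseline
    """
    cot_a = 1.0 / math.tan(math.radians(alpha_deg))
    cot_b = 1.0 / math.tan(math.radians(beta_deg))
    return baseline_m / (cot_a + cot_b)

# The larger the triangulation angle, the more a given depth change shifts
# the point in the image, i.e. the finer the achievable depth resolution.
print(f"{triangulation_depth(0.10, 60.0, 60.0):.4f} m")  # -> 0.0866 m
```
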
Structured light is an illumination technique in which the light forms a known pattern, often grids or stripes. The depth and surface information of an object can be derived from the way these patterns are deformed. Structured light is a measurement method with high-precision resolutions of less than 10 μm. This means that even the finest hairline cracks in objects, or the smallest structures invisible to the human eye, can be identified. 3D sensors often use patterns such as binary images (known as Gray code patterns) or phase images.
The Gray code pattern consists of a sequence of stripes, alternately lit and dark, that become progressively finer. By tracking the intensity progression with a camera, each pattern can be detected and a depth range thereby defined. Phase images, on the other hand, are wave patterns in the form of sine waves projected onto the object; the patterns can be generated with a digital micromirror device, for example. The phase of the wave is shifted from image to image, and the phase sequence can be used to obtain depth information with the help of a camera.
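
As an illustration, the following sketch shows the two textbook decoding steps under common assumptions: recovering the wrapped phase from four sine patterns shifted by 90° each, and converting a Gray-coded stripe index back to a plain binary index. The four-step scheme and the function names are illustrative choices, not any specific sensor's implementation.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four images of sine patterns shifted by 90 deg each.

    Each i_k is a grayscale camera image of the scene lit with
    I_k = A + B*cos(phi + k*pi/2); the ambient offset A and the
    modulation B cancel out in the ratio.
    """
    return np.arctan2(i3.astype(float) - i1, i0.astype(float) - i2)

def gray_to_binary(gray):
    """Decode Gray-coded stripe indices (integer array) to plain binary."""
    gray = np.asarray(gray)
    binary = gray.copy()
    mask = gray >> 1
    while np.any(mask):
        binary ^= mask
        mask >>= 1
    return binary
```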

Passive Stereo

In this method, two cameras view the same object at an angle. The distance of a point can be determined from the two different viewing angles. The difficulty is identifying the same point in both camera images, so the method performs poorly on low-contrast surfaces such as a white wall.
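
For rectified cameras, the difference in viewing angles reduces to a horizontal pixel offset (the disparity), and depth follows as Z = f·B/d. A minimal sketch, assuming the disparity map has already been computed by some matching step:

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth map for a rectified stereo pair: Z = f * B / d.

    disparity_px: per-pixel horizontal offset between the two images
    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers (m)
    """
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        z = focal_px * baseline_m / d
    z[d <= 0] = np.nan  # no match found, e.g. on a low-contrast surface
    return z
```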

Active Stereo

The setup is the same as for passive stereo. The only difference is that a pattern (e.g. randomly distributed dots) is projected onto the object, which makes it easier to match a point between the two cameras.

Time of Flight

In this method, the distance between the object and the sensor is determined from the transit time of light. The sensor emits light pulses that strike an object and are reflected back. The distance follows from the time the pulses take to return, which yields depth information such as the structures or distances of objects.
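
The underlying arithmetic is simply distance = speed of light × round-trip time ÷ 2, as this small example illustrates:

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_s: float) -> float:
    """Distance from the measured round-trip time of a light pulse.

    The pulse travels to the object and back, so the one-way
    distance is half the path: d = c * t / 2.
    """
    return C * round_trip_s / 2.0

# A pulse returning after 10 ns corresponds to an object about 1.5 m away.
print(f"{tof_distance(10e-9):.2f} m")  # -> 1.50 m
```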

Comparison of 3D Technologies

The Three-Dimensional Nature of the 3D Sensor

The 3D sensors project several patterns onto the object to be measured and then record them with a camera. The object is thus captured in three dimensions and digitized as a 3D point cloud. Neither the object nor the 3D sensor is in motion, so objects can be captured quickly and extremely precisely.

1) High-resolution camera
2) Light engine
3) X, Y = measuring range
4) Z = working range

3D Object Measurement Simplifies Automobile Production

Illumination: Light Engines for Ideal Illumination

The illumination source can be a laser or an LED. Lasers generate light with a high degree of temporal and spatial coherence and a narrowband spectrum, and the light can be shaped into a specific form by optics. An LED, in contrast, produces broadband light with hardly any coherence. LEDs are easier to handle and cover more wavelengths than laser diodes. Any pattern can be generated using digital light processing (DLP) technology. The combination of LED and DLP makes it possible to create different patterns quickly and efficiently, which makes it optimal for structured-light 3D technology.
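
As a sketch of what such a light engine is asked to display, the following generates the two pattern families discussed above, phase-shifted fringes and Gray code stripes, as plain image arrays; the resolutions and stripe counts are arbitrary example values.

```python
import numpy as np

def sinusoidal_patterns(width, height, periods=16, steps=4):
    """Phase-shifted fringe images (0..255) for the projector to display."""
    x = np.arange(width)
    base_phase = 2 * np.pi * periods * x / width
    frames = []
    for k in range(steps):
        row = 127.5 * (1 + np.cos(base_phase + 2 * np.pi * k / steps))
        frames.append(np.tile(row, (height, 1)).astype(np.uint8))
    return frames

def gray_code_patterns(width, height):
    """Binary stripe images whose stack Gray-codes each column index."""
    bits = int(np.ceil(np.log2(width)))
    x = np.arange(width)
    code = x ^ (x >> 1)  # Gray code of the column index
    frames = []
    for bit in range(bits - 1, -1, -1):  # coarse stripes first
        row = ((code >> bit) & 1) * 255
        frames.append(np.tile(row, (height, 1)).astype(np.uint8))
    return frames
```
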

Image Recording: Perfect Picture with CMOS Power

The object is recorded in two dimensions using a high-resolution camera. Modern cameras typically have a photosensitive semiconductor chip based on CMOS or CCD technology. A chip consists of many individual cells (pixels); modern chips have several million of them, allowing two-dimensional detection of the object. Because of its better performance, CMOS is the more common technology and is the one used in 3D sensors.

3D Point Cloud: From the Application to the Final Image

The pattern sequence of the structured light is captured by the camera. The collection containing all the images is called an image stack. The depth information of each point (pixel) can be determined from the images of each pattern. Since the camera has several million pixels and records each pixel in gray levels, several megabytes of data are generated in a short time. This amount of data can be processed on a powerful industrial PC or internally in the sensor with an FPGA. The advantage of internal computation is speed, while computation on a PC allows greater flexibility. The result of the computation is a 3D point cloud.
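
Once a depth value has been decoded for every pixel, back-projecting through a pinhole camera model turns the image stack into the point cloud. A minimal sketch, with the intrinsics fx, fy, cx and cy assumed to come from calibration:

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a per-pixel depth map into an N x 3 point cloud.

    depth_m:        H x W array of depths along the optical axis (m),
                    NaN where no pattern could be decoded
    fx, fy, cx, cy: pinhole intrinsics of the camera, in pixels
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    points = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return points[~np.isnan(points[:, 2])]  # drop shaded/occluded pixels
```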

Integration: From Sensor to Application

The 3D point cloud is calculated from the captured images. This can be done in the sensor or on an industrial PC. Software development kits (SDKs) from the manufacturer or standardized interfaces such as GigE Vision are used for easy integration.
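
As an illustration of such an interface, the open-source Harvesters library can acquire images from GenICam/GigE Vision devices in Python. This is a sketch only: the .cti producer path is a placeholder for the file supplied by the sensor manufacturer, and the exact method names vary between Harvesters versions.

```python
from harvesters.core import Harvester

h = Harvester()
h.add_file("/opt/vendor/producer.cti")  # placeholder: vendor GenTL producer
h.update()

ia = h.create_image_acquirer(0)  # first device found on the network
ia.start_acquisition()
with ia.fetch_buffer() as buffer:
    # First payload component of a 2-D image buffer: raw pixel data
    component = buffer.payload.components[0]
    frame = component.data.reshape(component.height, component.width)
    print(frame.shape, frame.dtype)
ia.stop_acquisition()
ia.destroy()
h.reset()
```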

Use of Monochrome Illumination

The use of monochrome illumination makes it possible to effectively suppress disturbing influences from ambient light by means of optical filters. The illumination can also be optimized for maximum efficiency and intensity.
