ReCap Forum

Seriously Tricky Registration

SOLVED
Message 1 of 3
BuckWyckoff
777 Views, 2 Replies
I have a project with 17 scans along a roadway and woods. My client placed checkerboard targets on small tripods all along the shoulder and the slope down to the woods, and targets on a few trees. This one has been a bear to register. I've got 6 of the 17 tight, but re-registering the other 11 won't work. Odd, because on the second go-around I even got some green Reg Refine feedback, and I thought for sure a few of them were golden.

 

So, talking to the client, we want to explore using survey points to improve the results. He Total Stationed every target. I see in the ReCap Help how to add survey markers. What I don't find in the help is... then what? I'm hoping I can identify three common targets on each side of a reg panel, put in specific survey markers for each side, then register those together, essentially having ReCap do an AutoCAD-style 3D Align from the specific points I enter and brute-force it together.

 

If I can do this and you can give me a little more info on it, or there is another method, please let me know.

 

It would be nice to nudge scans after registration. Some are close. It would be cool to move a scan while the others are visible in Project View, snap to a point on a scan, then, holding that as a temporary rotation point, nudge discrete X, Y, or Z axis rotation about that point to tweak it. Okay, perhaps that's asking for too much control.
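(For what it's worth, the "nudge about a snapped point" idea boils down to: translate so the pivot is the origin, rotate about one axis, translate back. A minimal pure-Python sketch of that math, purely illustrative — this is not a ReCap feature or API:)

```python
import math

def rotate_about_pivot(points, pivot, axis, degrees):
    """Rotate 3D points about `pivot` around a single axis ('x', 'y', or 'z').

    Hypothetical helper for illustration: translate so the pivot is the
    origin, apply a single-axis rotation, then translate back.
    """
    a = math.radians(degrees)
    c, s = math.cos(a), math.sin(a)
    out = []
    for px, py, pz in points:
        # Shift so the snapped pivot point becomes the origin
        x, y, z = px - pivot[0], py - pivot[1], pz - pivot[2]
        if axis == 'x':
            y, z = c * y - s * z, s * y + c * z
        elif axis == 'y':
            x, z = c * x + s * z, -s * x + c * z
        else:  # 'z'
            x, y = c * x - s * y, s * x + c * y
        # Shift back to the original coordinate frame
        out.append((x + pivot[0], y + pivot[1], z + pivot[2]))
    return out

# Example: a point one unit to the +X side of the pivot, rotated 90° about Z,
# ends up one unit to the +Y side of the pivot.
nudged = rotate_about_pivot([(2.0, 1.0, 0.0)], (1.0, 1.0, 0.0), 'z', 90.0)
```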

 

This brings up a discussion about ReCap registration. I thought what set it apart from Scene was that instead of finding geometry within scans and creating named geometry built upon those findings, to which the registration is then made, ReCap takes the three target points I discretely identify and does a 3D Align from those picks. If any refinement of my target picks was going on, I thought it was finding the actual target feature (checkerboard center, doorframe corner, whatever) more precisely from my best-attempt pick and then ultimately doing a 3D Align on those points. But on this job, I faithfully pick the center of the exact same three targets. The initial target "happiness" feedback says green "Good" and the plan view looks aligned. Then I refine it, the roadway is 60, 70, 80, 90 degrees off, and the feedback is zero across the board. How can it be that far off when I identified the same three features? I go back and identify three other identical features. Sometimes, no matter what, it won't align. It seems like in an environment devoid of nice edges (deserted roadways instead of nice interiors), the target picks are thrown out the window as the software tries to find its own edges, violating the three like points in space that I identified in the first place.

 

To help me and my client better scan these tricky scenes, I'd like to better understand what the software is doing when registering.  What are the logical steps it goes through?  When you know what the software is doing, you can better feed it data that has a chance of success. 

Message 2 of 3
ryan.frenz
in reply to: BuckWyckoff

Hi Buck,

 

At the moment, the survey alignment feature does not improve or otherwise affect the registration process.  Rather, the collection of points you designate as 'survey points' are used after the fact to perform a best-fit alignment to the coordinate system defined by those points.

 

We have development work going on to allow using these points to affect the registration results - stay tuned for that.  But at the moment, this feature is strictly for putting your project on a different known coordinate system.

 

To your questions about registration - ReCap works by identifying surfaces and surface-like features in the new scan, then comparing them to the same in the active registration group.  The three point pairs you specify are used to compute a rough estimate only - this estimate helps the algorithms to more efficiently filter and reason about the overlapping surfaces.
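(To make the "rough estimate from three point pairs" concrete: one classic way to get a rigid transform from three corresponding picks is to build an orthonormal frame on each triangle of points and compose the two frames into a rotation plus translation. A hedged pure-Python sketch — this is not ReCap's actual code, and production solvers typically use a least-squares fit such as the Kabsch algorithm instead:)

```python
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def unit(a):
    l = dot(a, a) ** 0.5
    return (a[0]/l, a[1]/l, a[2]/l)

def frame(p0, p1, p2):
    """Orthonormal frame (three column vectors) built from a triangle."""
    u = unit(sub(p1, p0))            # first edge direction
    w = unit(cross(u, sub(p2, p0)))  # triangle normal
    v = cross(w, u)                  # completes the right-handed frame
    return (u, v, w)

def three_point_align(src, dst):
    """Rough rigid transform mapping the 3 src picks onto the 3 dst picks.

    Returns (R, t), with R as a row-major 3x3 so p' = R @ p + t.
    Illustrative only; assumes the two triangles are (nearly) congruent.
    """
    Fs, Fd = frame(*src), frame(*dst)
    # R = Fd * Fs^T, written out with frames stored as tuples of columns
    R = [[sum(Fd[k][i] * Fs[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    rp0 = tuple(sum(R[i][j] * src[0][j] for j in range(3)) for i in range(3))
    t = tuple(dst[0][i] - rp0[i] for i in range(3))
    return R, t
```

With a good rough estimate like this in hand, a surface-based refinement then only has to search a small neighborhood of poses, which is presumably why the three picks matter even though they are not the final answer.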

 

So as a general rule, the success of a dataset will be based on the following (in order of importance):

  - Balance - this metric quantifies the 'diversity' of the direction of surfaces in the new scan.  Ideally, the scan will share surfaces in at least one vertical direction (ground or ceiling) and two or more horizontal directions (e.g. perpendicular walls, or tree trunks and a building).

  - Overlap - this measures the amount and density of the common surfaces.  The more the merrier, although with good balance this number can be pretty low and still get a good result.

  - Quality - this is a simple post-facto analysis of how tightly the overlapping surfaces ended up.  It usually only gets bad if there are very few overlapping points, for example if the only surfaces in the scan are very far away.
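(As a toy illustration of the "balance" criterion — at least one vertical and two or more horizontal surface directions — one could bucket a scan's surface normals by which axis direction they support. This is a hypothetical helper for intuition only, not ReCap's actual metric:)

```python
def has_balanced_directions(normals, cos_tol=0.9):
    """Toy balance check: do the surface normals cover at least one
    vertical direction and two distinct horizontal directions?

    `cos_tol=0.9` means a normal counts toward an axis if it is within
    roughly 25 degrees of it. Illustrative thresholds, not ReCap's.
    """
    axes = {'+x': (1, 0, 0), '-x': (-1, 0, 0),
            '+y': (0, 1, 0), '-y': (0, -1, 0),
            '+z': (0, 0, 1), '-z': (0, 0, -1)}
    covered = set()
    for n in normals:
        length = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
        for name, a in axes.items():
            # cosine of the angle between the normal and the axis
            if (n[0]*a[0] + n[1]*a[1] + n[2]*a[2]) / length > cos_tol:
                covered.add(name)
    vertical = ('+z' in covered) or ('-z' in covered)
    horizontal = len(covered & {'+x', '-x', '+y', '-y'})
    return vertical and horizontal >= 2
```

On this picture, a flat deserted roadway gives mostly +z normals (ground only), which would fail such a check — consistent with why scenes without walls, trunks, or buildings register poorly.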

 

Sorry for the long post - hope this helps.

 

Ryan

Message 3 of 3
BuckWyckoff
in reply to: ryan.frenz

Thanks.

My clients' environments do not lend themselves to conforming to those statistics. But that's another matter. In the meantime, I've come up with some workarounds, and I look forward to improvements that make those workarounds obsolete.
