Ever since learning of Decap, I have been testing the limits of what my computer can do for point-cloud processing. My workflow is as follows: I open a command line (as administrator), then open Windows Task Manager > Details and find the conhost.exe with the larger memory footprint. I "Set Affinity" on that exe, as well as on the cmd.exe, to a single thread. I then repeat this seven more times (using a different thread for each instance), since I have 8 threads available.

Next I make sure all of my FLS files are in a single folder and create 8 separate list control files, essentially dividing my overall load by 8 (72 scans: List1 = Scans 1-9, List2 = Scans 10-18, etc.). Then I set up my syntax something like:

decap.exe --importWithLicense "C:\ScanFolder" Project1 --controlFile "C:\ScanFolder\List1.txt" --decimation 1

I copy that syntax once, paste it into each command line, and simply increment the number in each instance. (A batch sketch that automates these manual steps is at the end of this post.) Once all the command lines are rolling, my CPU ramps up to nearly 100% for the duration and the entire project's scans are processed in no time flat. I have noticed this creates 8 separate instances of AdskFAROConverter.exe, which is normally my bottleneck when using ReCap.

After all of the Decap runs are complete, I compile all of the project Support folders into one folder (I call it CompiledSupport), then I start ReCap fresh. I create a new Project.rcp, and when it asks for the folder location to import, I point it at the CompiledSupport folder; everything loads very quickly, ready for registration. I will also note that once registered, whether Manual or Automatic, the Indexing process is completely gone... again, major time savings.

I should also note I use an SSD for this procedure, as I'm sure a standard spinning hard drive would choke on all of this. Even with 8 instances running, the only bottleneck is my 3.3 GHz i7 CPU: my RAM never seems to top 12 GB, and my SSD barely notices the effort. All of which I am OK with; it is just interesting that the biggest bottleneck in everything is the CPU load of the AdskFAROConverter.exe operation.

I have found this streamlines my overall process and saves me major hours of sit-and-wait time. While I recognize that this has already saved me a lot of time, I am curious whether there is a way to integrate this method into ReCap and make my life easier, possibly by adding options the user has to adjust so that it doesn't default to "taking over" a machine without the user's consent (and keeps everyone happy). I would imagine it would be something like: "You have X CPU threads available; how many would you like ReCap to use?" That way, if a user has 8 threads available but wants to keep working on something else while ReCap grinds, they can set it to 6 (or 2, or 4, or whatever) and still have the remaining threads for other tasks.

This would also open up major opportunities to improve registration times with hardware. A quick search of Newegg shows there is an i9 processor with 18 cores / 36 threads. If I knew I could work on other tasks while importing 34 scans at once, utilizing 34 threads directly in ReCap (through the program itself, not my own "Set Affinity" efforts), I would buy that CPU in a heartbeat... and probably a new mobo with an M.2 drive and some ridiculous RAM as well... but I'm veering off topic.

TL;DR: Based on this user's tested and utilized methods, can ReCap's programming be modified to multi-thread certain tasks such as Import and Indexing?
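For reference, here is a minimal batch sketch of how the manual steps above could be scripted. It assumes decap.exe is on the PATH (or the script is run from Decap's folder), that List1.txt through List8.txt already exist in C:\ScanFolder as described above, and that the machine has 8 threads (hex affinity masks 1 through 80, one per logical processor). START /AFFINITY simply stands in for the manual Task Manager "Set Affinity" clicks; the decap.exe arguments are the same ones shown above.

    @echo off
    rem Sketch only: launch 8 Decap imports in parallel, each pinned to one logical processor.
    rem Assumes List1.txt .. List8.txt already exist and decap.exe is on the PATH.
    setlocal enabledelayedexpansion
    set SCANDIR=C:\ScanFolder
    set COUNT=0
    rem Hex affinity masks for logical processors 0-7.
    for %%M in (1 2 4 8 10 20 40 80) do (
        set /a COUNT+=1
        start "Decap !COUNT!" /affinity %%M decap.exe --importWithLicense "%SCANDIR%" Project!COUNT! --controlFile "%SCANDIR%\List!COUNT!.txt" --decimation 1
    )
    endlocal

Each START opens its own window, so the eight instances can still be watched individually, and trimming the mask list (for example, leaving out a couple of processors) is effectively the "leave some threads free for other work" option I am asking ReCap to expose.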