ReCap Forum
Aw Snap! new project crashes

Message 1 of 3
BvC_Prod
406 Views, 2 Replies


One thing that amazes me about ReCap Photo is how clearly some unthinkably brilliant people have worked very hard to make it all work, and how, at the same time, this online program is still devoid of some basic "survival tools". A save button is absolutely critical when creating a new project, and I've lost a great deal of time to the following scenario. I create a new project and upload images, which, with my 5616x3748 imagery, takes long enough that I step away from the computer in the name of staying in motion. If I'm away too long, I come back to an image grid full of red Xs and "image failed to upload"; I click "submit again" and get Chrome's "Aw Snap!" page.

I already suspect that letting a new project sit idle causes it to get timed out somehow, whether by Chrome or by ReCap Photo. My suspicion is only reinforced by the fact that if I sit and watch until the last image finishes uploading and intervene immediately, I can carry on with the advanced tools and set registration points. Because of the nature of my subject matter, highly irregular gypsum cave walls, and, here again, the lack of simple "survival tools" like an adjustable magnifier, there is no quick way to bear down on defining common features, and I'm fighting a number of issues with the GUI (more on that later). Not only does registration take far longer than it should, all of that work is for naught when the "Aw Snap!" message rears its ugly head once, twice, and three times again. I can't even survive long enough to see what the new Smart Texturing feature brings to the table, all for want of so simple a feature as a save button. I've begged for this off and on for going on a year, and I beg again.

 

Sometimes I encounter behaviors that may or may not relate to what feels like a timeout issue; I'm attaching a couple of screen captures to illustrate. I'll be manually stitching and suddenly lose access to one of the images. If I return to the grid view, the enlarged view of the image still appears on rollover, but selecting it returns a black frame. In the second screen capture, I had planted a point in an image and then lost that image in the same way; when I return to that image with the previously added point, I might see only the icon marking the point against black (see attached). When I encounter these behaviors, and this has happened many, many times, I'm usually tempted to preemptively submit an incomplete project just to get something going, knowing I'll have to submit a new version with the next batch of changes. This is good neither for me nor for the cloud server, as the server now takes a hit twice for the same work. In any event, when I submit, sometimes I manage to have it take, and other times I end up with "Aw Snap!" and have to start from scratch. I log into AD360 and trash the folder of images, since there's no way to access already-uploaded imagery from the ReCap Photo site, another missing "survival tool" that taxes resources on both the user end and the cloud server.

 

At the risk of bludgeoning the point, it's worth revisiting the functionality and design of the GUI for registering points. Just as the computer needs help, via manually set points, to stitch complex subject matter, the user's brain needs all the help it can get in providing those points. We're sometimes working between two monitors, head and eyes switching back and forth, quickly trying to correlate a feature on a model in Maya with the corresponding feature in a photo in ReCap Photo. Even if the imagery were identical, the subject matter here is so incredibly irregular that even under optimal conditions, i.e. in Lightroom, where I can see and navigate imagery in its pristine state, I'm challenged to correlate common features. In Maya, where the texturing and ambient lighting render a very different look from the source imagery used to produce the textured mesh, my brain works especially hard looking back and forth between Maya and Chrome/ReCap just to identify a good candidate for a locator in Maya and a point in ReCap. I can see how accessing sets of large rasters over the internet and then magnifying parts of them might stress resources, but it would appear the possibilities haven't really been explored. For instance, one suggestion I made a while ago was to allow any active point's magnifier, in one or both panes, to persist; that taxes the brain less, since it no longer has to remember the precise location and quality of the feature being matched. I can't see how allowing previously displayed data to persist places any extra load on the server.
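To make that persistence idea concrete, here's the kind of thing I'm picturing. This is only a rough sketch, assuming a browser-based viewer where the magnifier is an ordinary page element; every name here is invented for illustration, none of it is ReCap's actual code:

```typescript
// Rough sketch, purely illustrative (not ReCap Photo's actual code).
// Idea: when a point is committed, keep its magnifier on screen and mark it
// "pinned" rather than removing it, so the previously displayed patch
// persists with no additional requests to the server.

function pinMagnifier(magnifier: HTMLElement): void {
  magnifier.dataset.pinned = "true";       // ignore further mouse moves
  magnifier.style.pointerEvents = "none";  // let clicks pass through to the pane
  magnifier.style.opacity = "0.9";         // subtle cue that it's locked
}

function onPointCommitted(magnifier: HTMLElement): void {
  // Instead of magnifier.remove(), just pin it; the pixels already fetched
  // for the zoomed patch stay visible at zero extra cost.
  pinMagnifier(magnifier);
}
```

The point of the sketch is simply that nothing new has to be fetched or computed: the data is already on screen, it just shouldn't be thrown away.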

 

The ability to resize the magnifier (while retaining zooming), perhaps using control/scroll, would also go a long way toward feeding the user's brain the most relevant cues. To break this down a little: when the user first takes in the whole image, after orienting to establish a patch of pixel real estate in one pane common to the second pane, the focus should then be on that patch. The present magnifier provides a patch, but relative to the remaining pixels in the full image it appears to occupy less than 10% of the pane. If the user could control/scroll to resize the magnifier anywhere from 10% to 80%, then even at the upper end, which leaves only 20% of the full image exposed, the user's brain doesn't care, because the focus is now on the patch inside the magnifier. Sure, especially at that size (and even at smaller sizes), a problem arises as the user attempts to position the magnifier toward the edges of the image; there's clearly some routine running to deal with that now, and I'm sure there's a way to handle it. Thinking about it, this could work: rather than present the full-frame image at its current size, which eats up the entire pane, the full image could be reduced so the magnifier can spill over its edges. And if the two panes had a divider that resized each pane, then once the user has planted a point in pane 1 (its magnifier persisting), the user drags the divider to the left, shrinking pane 1 and centering its magnifier, while pane 2 now enjoys more pixel real estate; the user then resizes the magnifier there, in combination with zooming, to locate and set a matching point.
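Here's roughly how I imagine the control/scroll resize behaving, again just an illustrative browser-side sketch with invented names and the 10-80% clamping described above, not a claim about how the viewer is actually implemented:

```typescript
// Rough sketch only: the magnifier is assumed to be an absolutely positioned
// element inside a pane. Ctrl+scroll resizes it between 10% and 80% of the
// pane's width; plain scrolling keeps its normal meaning.

function attachMagnifierResize(pane: HTMLElement, magnifier: HTMLElement): void {
  let sizePct = 0.1; // start near the current ~10% of the pane

  pane.addEventListener("wheel", (e: WheelEvent) => {
    if (!e.ctrlKey) return;  // only act on control/scroll
    e.preventDefault();      // keep the browser from zooming the whole page

    // Scroll up grows the magnifier, scroll down shrinks it.
    sizePct += e.deltaY < 0 ? 0.05 : -0.05;
    sizePct = Math.min(0.8, Math.max(0.1, sizePct)); // clamp to 10-80%

    const sizePx = pane.clientWidth * sizePct;
    magnifier.style.width = `${sizePx}px`;
    magnifier.style.height = `${sizePx}px`;
  }, { passive: false });    // needed so preventDefault() is honored
}
```

Resizing the element says nothing about where the magnified pixels come from, so as far as I can tell it doesn't change what has to be fetched, only how much of it the user gets to see at once.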

 

For any active point, as the user navigates from image to image, what matters to the user is information about that point in that image. So if I've set point 1 in image 1, then navigate to images 2-4 and later need to return to point 1 in image 1: whenever point 1 is active, via its selection in one of the other images, then as soon as image 1 is displayed its magnifier should be displayed automatically as well. This behavior lets the user quickly compare a common feature both at full zoom and relative to a broader swath of features, a combination that lets the user's brain more readily assess the quality and accuracy of any instance of a matched point.
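Spelled out as a sketch, and again purely illustrative with invented names rather than anything from ReCap's code: the viewer would simply remember where each point's magnifier sat in each image, and bring it back whenever that image is shown while the point is active:

```typescript
// Rough sketch only: remember magnifier placement per (point, image) pair
// and restore it automatically when that image is displayed with the point
// active.

interface MagnifierState {
  centerX: number; // center of the magnified patch, in image pixels
  centerY: number;
  zoom: number;    // magnification factor
}

// Keyed by "pointId:imageId", one remembered magnifier per pair.
const savedMagnifiers = new Map<string, MagnifierState>();

function saveMagnifier(pointId: string, imageId: string, state: MagnifierState): void {
  savedMagnifiers.set(`${pointId}:${imageId}`, state);
}

// Called whenever an image is brought into a pane; if the active point was
// already placed in that image, its magnifier reappears where it was left.
function onImageDisplayed(
  imageId: string,
  activePointId: string,
  showMagnifier: (state: MagnifierState) => void
): void {
  const saved = savedMagnifiers.get(`${activePointId}:${imageId}`);
  if (saved) {
    showMagnifier(saved);
  }
}
```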

 

Such added functionality, of course, comes at a price; I'm aware of how difficult it must be to design, implement, and test changes to the GUI. The cost of making such changes, however, should be weighed against the cost of NOT doing so. If I'm forced to resubmit the same project however many times for reasons that come down to my inability to adequately see the data, i.e. human feature detection, then on every pass the cloud server has to chug through camera orientation and, more significantly, bundle adjustment. That's a cost hit that surely needs to be considered in assessing the cost/benefit of improving the GUI.

 

Ultimately, what I'm seeing and knowing is this: without a reasonably predictable workflow that lets me produce a salable demo in a reasonable time period, how am I to responsibly invite a bona fide commercial project dependent on the present tool set? The wizardry in ReCap Photo is unquestioned. I urge its managers to consider rebalancing the focus toward more basic "survival tools" in upcoming upgrades.

 

Thanks for the airtime!

Benjy

Message 2 of 3
vidanom
in reply to: BvC_Prod

Thank you, Benjy,

 

We’ll make sure your voice gets to the product’s management.

 

Mitko

Message 3 of 3
BvC_Prod
in reply to: vidanom

Many thanks, Mitko.

Benjy

