by LOONYLEN, 10-11-2013 09:00 AM - edited 10-11-2013 09:04 AM
I believe iParts/iAssemblies play an essential role within Inventor and deserve a review. Some Inventor users rely heavily on iPart/iAssembly functionality, which also means that these models require regular updates. Just opening the factory and switching to a different member is treated as a modification that can leave members out of date. Even when there is a real edit, not all factory changes affect all of the members. Currently, this means having to manually check out all of the members before being able to update them. Better support for iParts/iAssemblies would be great, but forcing a checkout of all members isn't the solution.
The current workaround suggested by Autodesk is to place all of the generated components in a new file so everything can be updated at once. This is feasible for small iPart/iAssembly files with few variations, but not for larger assemblies with many different iParts/iAssemblies, each with many members.
Instead, why not simply check out all of the children at the same time as the factory? You know they will need to be updated when the factory is altered, so why can't this be done automatically?
Not to mention that after regenerating all of the members of a factory, thousands of drawings inside the Vault now display the warning "Warning, new data available" to anyone viewing them. This has coworkers in different departments thinking that they are looking at out-of-date drawings when they are not.
Let's say I've added a new "standard" part (new member) to a factory. The entire factory (all members) now needs to be regenerated!?! Why? Nothing else has changed; I've just added a new member, and now thousands of drawings carry a banner that reads, "Warning, new data available".
Can this member update be automated AND work together with Vault?
What about an automatic batch update process?
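A batch update along these lines could be sketched as follows. The object model here is hypothetical (factory versions and members as plain dictionaries, stand-ins for the real Vault/Inventor API objects); it only illustrates the idea of finding stale members and updating them in one automated pass instead of checking everything out by hand.

```python
# Sketch of an automated member-update pass. The data structures are
# hypothetical stand-ins for real Vault/Inventor factory and member objects.

def stale_members(factory_version, members):
    """Return the members generated from an older factory version."""
    return [m for m in members if m["generated_from"] < factory_version]

def batch_update(factory_version, members):
    """Check out, regenerate, and check in every stale member in one pass."""
    updated = []
    for member in stale_members(factory_version, members):
        # In a real implementation, Vault check-out, Inventor regeneration,
        # and Vault check-in calls would happen here.
        member["generated_from"] = factory_version
        updated.append(member["name"])
    return updated

members = [
    {"name": "Bolt-M6", "generated_from": 3},
    {"name": "Bolt-M8", "generated_from": 5},
]
print(batch_update(5, members))  # only Bolt-M6 is stale
```

Adding a new member would leave every existing member's `generated_from` equal to the factory version, so nothing would be touched and no "new data available" banners would appear.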
Also, we've often found that not all changes to the factory part trigger Inventor to actually consider the generated members out of date (requiring a save). Sometimes, when making a minor change (like adding an iMate) and generating new files, nothing happens: no new files are generated that show the change, whether the members are checked out or not. If I select "generate files", it should generate the files; the time stamp of the file on disk should change to the current time, and in this case it does not. It forces me to make a physical change to the factory (dirty the file) to get Inventor and Vault to recognize that a change has been made.
Please correct these issues; this functionality is a major reason Inventor is so widely used.
Let's face it, the number one priority should be to create great products. The number two priority should be that the products can work together seamlessly.
We currently have 5 ADMS servers, and that number may grow when we add our foreign subsidiaries.
For maintenance purposes it would be nice to have an option to set an ADMS to maintenance mode and direct the users to a different ADMS for that period. We plan to experiment with whether this is possible via a proxy or DNS redirection, but native support through Vault would be welcome.
There should be a relatively simple option in ADMS to truncate data from the vault, or to backup just the configuration.
Doing an implementation at a publicly traded company means going through a multi-stage process (dev, test, UAT, prod) with multiple vault environments to verify the configuration before go-live. It would be very convenient to be able to remove all of the files from the vault to provide a clean starting point for a new environment.
Alternately, the option to backup JUST the configuration would be a huge help as well.
If you've attempted to use the Reference Repair Utility, you'll quickly become frustrated; it could really use some enhancements:
In the settings I should be able to browse to and select the Inventor project file. The slash ("/") is the opposite of the one used by all other paths in the settings, which caused me some grief the first time I used this.
I shouldn't have to close the utility when it's done processing a folder; I should be able to pick a new folder and continue.
I should be able to select multiple folders to process
It would be nice to be able to browse for folders to process
It can error and stop. As a script/batch processing utility it should just log the error and continue; there should be zero user interaction until it is done.
When it errors, it says to look at the log, but the log contains absolutely nothing about the error or what caused the stop.
I would assume that most people run the scan somewhere other than the server and apply the changes to the .xml there as well, but then need to go to the server to import the fixes. It would be nice to be able to "push" the fixes from the same system used to scan and fix.
I use the command line to import the fixes but then need to use the ADMS Console logs to see any errors... huh? The import needs its own GUI that writes its own log/error report.
The goal is to run the Job Processor on many files that are spread across several replicated servers.
The hurdle is that if the Job Processor is logged into one server and attempts to process a file that is currently owned by a different server, the task cannot be completed successfully: "error"...
We need to be able to allow for this in the Job Processor settings, for example by choosing one of:
1. "Skip files that are not owned by your server", or
2. "Run jobs with preference to one server over another": the Job Processor would batch together a few hundred jobs to complete on the local replicated server, then restart (or log out and log into a different server) to run a few hundred files that are owned by the other server(s), or
3. Allow the Job Processor to run on files regardless of whether they are owned by another server, or
4. Allow the option to choose: "if files are owned by a different server, then just run" (several options may be available here).
There may be easier ways to manage this, but the Job Processor is going to be of much more use with some enhancement. (To be read alongside the several other requests for Job Processor improvements currently on the Ideas station and discussion forums.)
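The batching behavior in option 2 could look something like the sketch below. The job records and their `owner` field are hypothetical; a real implementation would replace each batch with a login to the corresponding server and a run of that server's queue.

```python
# Sketch: group queued jobs by owning server so the Job Processor can run
# the local server's batch first, then each remote server's batch in turn.
# Job records are hypothetical stand-ins for real Vault job-queue entries.
from itertools import groupby

def batch_by_owner(jobs, preferred_server):
    """Return (server, [job ids]) batches, preferred server first."""
    order = lambda j: (j["owner"] != preferred_server, j["owner"])
    batches = []
    for owner, group in groupby(sorted(jobs, key=order),
                                key=lambda j: j["owner"]):
        batches.append((owner, [j["id"] for j in group]))
    return batches

jobs = [
    {"id": 1, "owner": "SiteB"},
    {"id": 2, "owner": "SiteA"},
    {"id": 3, "owner": "SiteB"},
    {"id": 4, "owner": "SiteA"},
]
print(batch_by_owner(jobs, "SiteA"))  # SiteA's jobs first, then SiteB's
```

Each batch would then be processed while logged into that server, avoiding the ownership error entirely.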
Once a file/folder is replicated from one site to another, the challenge is to unreplicate it.
Currently there appear to be two options to unreplicate a file:
1) In the ADMS Console, edit the "Replicated Folder" setting and untick the folder that you want to unreplicate, then delete the file(s) from the vault and re-add them to the now-unreplicated folder. This will prevent the files from being replicated in future.
2) Disable the vault on the site, delete the local filestore, and enable it again. This can be dangerous if a file is not present on at least one other site; it's OK if you know all files are replicated to at least one other site.
The wish is to untick a folder in the "Replicated Folder" dialog for that site and have the ADMS Console check whether each file exists on another site before allowing the admin to unreplicate the folder. Once the check is completed, the files in the selected folder are removed from that site's filestore.
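The safety check being asked for could be sketched like this. The replica map (file name to the set of sites holding a copy) is a hypothetical stand-in for whatever the ADMS Console knows internally; the point is simply that a file may only be removed from a site's filestore if at least one other site still holds it.

```python
# Sketch of the pre-unreplication safety check. The replica map is a
# hypothetical stand-in for the ADMS Console's internal replication state.

def safe_to_unreplicate(folder_files, site, replica_map):
    """Split a folder's files into those safe to remove from `site`
    (present on at least one other site) and those that block removal."""
    removable, blocked = [], []
    for f in folder_files:
        other_sites = replica_map.get(f, set()) - {site}
        (removable if other_sites else blocked).append(f)
    return removable, blocked

replica_map = {
    "bracket.ipt": {"London", "Sydney"},
    "frame.iam": {"Sydney"},
}
print(safe_to_unreplicate(["bracket.ipt", "frame.iam"], "Sydney", replica_map))
```

If the blocked list is non-empty, the console would refuse (or warn) instead of silently deleting the only remaining copy.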
If you think this would be a useful enhancement, add a comment to this thread...
In our operation we have seen some file/folder ownership management problems between replicated servers (inability to take ownership), i.e. when a remote server is down there is no way to transfer ownership to other servers/sites.
I think there is a need for improved functionality on this topic and on other general topics relating to the replicated environment.
Is it sufficient to accept that if a remote server is down, there is no way to ever get ownership of the files back, despite very short or zero lease times?
I would have thought that the provider server would be able to force ownership back to itself or to another server if the files were not checked out. I have searched to no avail, and our VAR has also advised that we cannot do this.
If that is the case, then I feel that with some functional development this should be possible, or at the least there should be something along the lines of a configurable automatic ownership transfer that sends ownership from remote sites, after hours, back to the server that owns the files before business resumes the next day. That way, if a site does not boot up at the start of business, the other remote servers have the option to take ownership if it is urgent.
This may not be the answer, but with sensible discussion and development there may be solutions to the situations that could arise.
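The after-hours handback rule described above could be sketched as a simple eligibility check. The file records and their fields are hypothetical; the one real constraint carried over from the post is that only files that are not checked out should ever have their ownership forced back.

```python
# Sketch of the after-hours ownership handback rule. File records are
# hypothetical stand-ins for Vault ownership/checkout state.

def transferable(files, from_site):
    """Files whose ownership could safely be handed back after hours:
    owned by the given remote site and not currently checked out."""
    return [f["name"] for f in files
            if f["owner"] == from_site and not f["checked_out"]]

files = [
    {"name": "pump.iam", "owner": "Remote1", "checked_out": False},
    {"name": "valve.ipt", "owner": "Remote1", "checked_out": True},
    {"name": "base.ipt", "owner": "HQ", "checked_out": False},
]
print(transferable(files, "Remote1"))  # only pump.iam qualifies
```

A scheduled job could run this check each night and transfer only the eligible files, leaving checked-out work untouched.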
Example with document locks and without Check in/Check out:
I'm working on a fast network and data loading is not an issue.
Why do I need to Check out then?
I just want to (automatically) lock the documents I'm working on and I do not want the hassle of Check in/Check out.
So, when I hit the Save button everything is saved into Vault.
Example with Check in/Check out:
- When I'm going to work on site, I can just check out all documents I need, and when I'm back I can check them all in.
- When I'm working at home through a "slow" internet connection, I do not want all that traffic every time I save something, so I need to check out at the beginning of my work and check in when I'm done.
This can be a user setting, but there also needs to be an admin setting to overrule users.
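The user-setting-plus-admin-override idea amounts to a small precedence rule, sketched below. The mode names and setting shape are made up for illustration; the only point is that an admin-level value, when present, always wins over the user's preference.

```python
# Sketch of resolving the working mode from a user preference with an
# optional admin override. Mode names are hypothetical.

def effective_mode(user_pref, admin_override=None):
    """An admin override always wins; otherwise the user preference applies."""
    assert user_pref in ("auto-lock", "check-in-out")
    return admin_override if admin_override else user_pref

print(effective_mode("auto-lock"))                  # user preference applies
print(effective_mode("auto-lock", "check-in-out"))  # admin forces check-in/out
```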