More testing to remove notes and check repeatability of results.
It is more than just memory speed versus memory response time:
Overall memory response time matters more than raw memory speed, but to achieve the better response time a faster memory kit was purchased (2000 MHz CAS 9) and then run at a slower 1600 MHz to achieve the faster response time of CAS 7. Presumably this approach can be taken further by detuning faster memory, creating a broader performance curve with improved memory response time and speed. Currently (2011) DDR3 2200 MHz CAS 9 is the highest-rated memory available in 4 GB modules; as higher-rated memory becomes available, C3D performance should improve considerably.
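The arithmetic behind running faster memory at a slower clock can be sketched quickly. First-word access time is CAS cycles divided by the I/O clock, and DDR ratings (e.g. "DDR3-2000") are transfer rates, double the actual clock. A minimal sketch using the numbers from this thread:

```python
# Effective CAS latency (first-word access time) in nanoseconds.
# DDR transfer rates like "DDR3-2000" are in MT/s; the I/O clock is half that,
# so latency_ns = CAS cycles / clock_MHz * 1000.
def cas_latency_ns(transfer_rate_mt_s, cas_cycles):
    clock_mhz = transfer_rate_mt_s / 2      # actual I/O clock in MHz
    return cas_cycles / clock_mhz * 1000    # cycles per MHz -> nanoseconds

rated = cas_latency_ns(2000, 9)    # 9 cycles / 1000 MHz = 9.0 ns
detuned = cas_latency_ns(1600, 7)  # 7 cycles / 800 MHz  = 8.75 ns
print(rated, detuned)              # the detuned setting responds faster
```

So the 1600 MHz CAS 7 setting genuinely has a lower access latency (8.75 ns vs 9.0 ns) despite the lower clock, which is why the detuning trade can pay off.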
It seems like you would see a much better performance increase by changing your workflow and splitting your data into smaller, more manageable pieces. Upgrading your RAM and overclocking seem like a waste of time for the performance increases you are seeing.
"For context, this specific case is using a location-based model, there are 450+ surfaces, 77 corridors, 10 alignments, and many sampleLines. The workflow for a location-based C3D model is without doubt complicated and I am not going to test it to show how complicated or if it is even possible. "
I have never had to manage the enormous amount of data you have, but I would say yes, you should be using DRefs and xrefs. I don't see any other way with that much data, and it is slowing your machine to a crawl.
Keep it separated into smaller, much more manageable pieces. I am not aware of your project scope or your goals with it, so I can't really give any other suggestions.
I made a video of the C3D model: http://youtu.be/vfZuGLgVMEc
Can you watch it? I'd like to discuss whether the shortcuts and xrefs can be used. If it seems possible, I will take a day and give it a try to see what happens - I suspect it will explode into complexity and there will be some issues linking components together when they reside in two files.
Since you posted a private message requesting more clarity on what I meant by workflow, I thought I would provide a link to a blog post that explains what both David and I are getting at. The post is at:
If you do not split up your data (especially at the size of your project), you will bog yourself down to the point of choking no matter what hardware you have, simply because the software is built for design and is therefore VERY accurate, requiring large memory computations. This is not a matter of who is working on the project - although that may have an impact on HOW you break up the data - but of taking advantage of the software's fullest potential.
Looking at hardware before your project setup best practices have been reviewed is probably the reason your discussion is not being followed or commented on much.
appreciate your time,
Reviewing the civil4D blog, the first thing I notice is that xrefs are discussed in the context of plots - this is a VDC model, so there are no plots; C3D can remove the plot function as far as I am concerned. The outputs from this C3D model are the quantities csv reports, a surface model for field 4D visualization, and a surface model to test total station survey and GPS machine control application/performance. Second, there is no linework - at least nothing that is going to affect performance enough to be worth the trouble of an xref.
I am fairly certain my workflow follows best practice. I tested shortcuts and did not find a significant benefit from their use; as you suggest in your blog, I had the existing surface and the corridor alignments with profiles as shortcuts at one time. After discussion with some experienced C3D experts, I decided not to use xrefs for similar reasons.
As of right now the C3D project model is complete, and the computer worked fine until the sampleLines were built; I think better RAM and restoring my PCIe hard drive would resolve this too. The only reason it matters to me is that I am simulating the process of handing the C3D model off to the contractor - if I handed them this model with the sampleLines, it would be very slow for them to manipulate the model for construction purposes.
I am interested in your suggestion to split the model into multiple models and, I assume, let the contractor then reassemble it. Since you have suggested it, I am now compelled to test your approach, but first I must learn exactly what you are saying and how to follow your process. I have several questions that will help me set up a test:
Is opening two or more instances of C3D at once mandatory?
Your questions show thoroughness in collecting your thoughts, but they also show that you haven't done enough personal research on DRefs and Civil 3D workflow. For example, you can't DRef corridors, but you can xref them in and cut sample lines from the xref'd data. You also can't make a change to a referenced object (e.g., an alignment). That is the point of referencing - it leaves the definition with the parent object.
There is much out there on the internet - AU classes, Autodesk documentation (although a little old and rusty), blog postings, and other discussions in this group - so take the principles from that material and put them to the test on your needs; providing answers to your questions before you have done this research would be very time-consuming.
Please do some additional personal research and test a blend of the principles you have read about (you don't have plot sheets, but you do have some form of output in which you want the information visually represented in a format the contractor can quickly understand), and then post any questions you have at that point.
So, I put off trying the divide-and-conquer approach that Josh - and, after some research, apparently everyone else - is using for Civil 3D. In the meantime I hassled OCZ to replace my RMA'd PCIe drive in less than a couple of weeks, and I was participating in a separate forum thread focused on performance. A post there suggested running scalelistedit and -purge "on the lowest child drawing and work your way up to the parent drawings"; this post's default assumption that there are child drawings indicates how widespread the divide-and-conquer approach is. I followed the suggestion, removing all but the 1:1 and 1':1' scales and purging 10 purgeable regapps. In the process I closed and froze all the open layers (maybe a dozen or so out of many), then opened just the specific one to which I am applying payItems. Now the mouse lag and hesitation that was preventing me from precisely placing the cursor over a line is gone. The immediate problem is solved. It looks like layer 0 places a big drag on the system: although it is empty of primitives, a bunch of corridor-associated styles and some surface attributes are set to layer 0. Layer 0 on - problem; layer 0 off - good mouse performance. But maybe it was the scales and the 10 regapps.
The issue is only partially solved, since this does not address the sampleLines performance or the long runtimes that were the best I could obtain with optimized settings. But the one-minute runtimes are acceptable since they are only incurred per sampleLine - there are 77, and it was only an issue for one site's alignment, which had most of the sampleLines - and most of the delay was just the initial opening of the alignment, after which the delay was much less for succeeding sampleLine edits. I could simply have split that one alignment into two and, again, problem solved. That leaves the corridor save runtime - it was nearly instant before the sampleLines were made, so this seems like 'workflow': be certain everything you want to do with corridors is done before creating sample lines. The payItems can be assigned through the toolspace settings editor to avoid the corridor delay (I only thought of and tested that after suffering through all the corridors). Save the sampleLines for last, just before associating the QTO computeMaterials.
Also, I am using an older SSD that, while a quality drive and one of the fastest available, is nowhere near as fast as the PCIe solid state drive that failed prior to the runtime tests. From my first post to this thread I have maintained that memory with a faster response time and higher speed will improve C3D performance. I did some research and found that DDR4 memory will be available next year - and should help C3D performance - ending the past few years of hardware issues that the x64 move only partially resolved.
I will still test the divide and conquer approach in the next few weeks as time permits, simply so I have been thorough in testing, but otherwise I am completing the QTO and visualization, and moving on.
Therefore, from my perspective - a VDC model for quantities csv reports (works with some Excel compilation), visualization through nwcout (works fine), and GPS survey and machine control (not tested, but research shows it can be improved), without the need for plots and the associated layouts, sections, tags, and text - the divide-and-conquer approach is, with this machine, overkill and now unnecessary. Large surfaces and point clouds are clearly good candidates for shortcuts, but large-scale dividing of the project is not necessary unless there are collaborating parties, in which case multiple DRefs and xrefs are required.
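As an aside on the quantities csv reports mentioned above: the "Excel compilations" step could in principle be scripted instead of done by hand. A minimal sketch, assuming hypothetical file names and column headers (`PayItem`, `Quantity`) - the actual QTO computeMaterials output columns may differ:

```python
import csv
import glob
from collections import defaultdict

def compile_quantities(pattern):
    """Sum quantities per pay item across several QTO csv reports.

    Assumes each report has columns 'PayItem' and 'Quantity'
    (hypothetical names, not taken from actual QTO output).
    """
    totals = defaultdict(float)
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["PayItem"]] += float(row["Quantity"])
    return dict(totals)
```

Run against a folder of per-alignment reports (e.g. `compile_quantities("qto_report_*.csv")`), this produces one combined pay-item total per code, which is essentially what the Excel compilation does manually.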
I have succeeded in hurdling over the hardware wall with brute force.
I've worked with multiple DataSCs and the performance is improved dramatically. I feel this is something that could be set up simply and logically to help multiple users and speed up your main design files' performance. It's all about balancing what you're doing; going to the extreme starts to work against you on this one.
Ensuring the user is utilizing all available RAM by adjusting this manually does help, and is probably the single most helpful step C3D and ACAD users can take to work around restrictive hardware.
I do like the comment on how we should focus on adjusting the C3D workflow first and foremost. I've had success keeping all my surfaces in one separate dataSC and all alignments in their own file as well. Good point on how far you take this stuff: opening and editing separate files for every aspect continuously doesn't seem practical. I personally focus on moving large surfaces and my alignments to separate dataSC files, and my performance is improved.
We all need to remember that 70 sample lines is not realistic for most designs. You could have 3x that many and be ground to a halt. I hate that C3D and ACAD can't be more stable and streamlined to focus on production and speed, instead of making sure any junior or dummy can open the software and "get by". I think managers even believe anyone can draft/design; the problem is you need to know how to use the software and set up projects. I'd say 50% of all users don't know how to do this properly (just a hunch).