units precision preference vs. data entry precision
I am working on a design in mm units, with the display precision (general and angular) set to 0.12345, i.e. five decimal places. On several occasions I have entered numerical data for a positional command, such as Move or Plane Offset, as an exact integer value with only three trailing zeros after the decimal point, and come back later to find that the distance reported by the Measure function is not exact; e.g., an entry of 6.000 mm might measure as 5.99995 mm later.
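For what it's worth, here is a quick sanity check I ran in Python, under the assumption (which I can't verify) that the modeler stores lengths as standard 64-bit doubles. It suggests the typed-in 6.000 is not the weak link:

```python
# Quick check, assuming lengths are stored as IEEE 754 double-precision floats
# (an assumption on my part; the modeler's internal representation isn't
# documented here).
import decimal
import math

entered = 6.000     # value typed into the Move / Plane Offset field, in mm
measured = 5.99995  # value later reported by Measure, in mm

# 6.0 has an exact binary representation, so the entry itself loses nothing.
print(decimal.Decimal(entered))   # prints: 6

# The observed discrepancy is roughly 5e-5 mm ...
print(abs(entered - measured))    # roughly 5e-05

# ... which is about ten orders of magnitude larger than the spacing of
# doubles near 6.0, so plain floating-point storage error can't explain it.
print(math.ulp(6.0))              # ~8.9e-16
```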
My question is basically whether entering "incomplete" command data, without all the decimal places filled in with zeros, allows the software to artificially round off the value when executing the command, or whether there are other variables contributing to this (small) degree of fuzziness. I have noticed this before and basically wrote it off to sloppiness in my point selection, but I had a couple of cases in the last day or so that seemed pretty clear-cut.
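On the display side, here is a minimal sketch (function name and sample values are mine, purely for illustration) of how a five-decimal readout rounds whatever the model actually holds, so the 5.99995 shown by Measure is itself a rounded number that could stand for a small range of stored values:

```python
# Sketch of what a 0.12345 display precision (five decimal places) does to a
# readout: it rounds whatever is actually stored. The candidate values below
# are hypothetical, chosen only to show that several different stored lengths
# all display identically.
def displayed(length_mm: float, places: int = 5) -> str:
    """Format a length the way a five-decimal readout would show it."""
    return f"{length_mm:.{places}f}"

for stored in (5.999949, 5.99995, 5.999951, 6.0):
    print(f"stored {stored!r:<10} -> displayed {displayed(stored)}")
```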
Obviously, quibbling over 50 nanometers is "gettin' down there", but sometimes it's important.