Message 1 of 2
MEL fopen problem: Internal file descriptor table is full?
08-25-2010 07:48 AM
I'm running into this issue and was wondering if anyone else had encountered it.
Basically I have a bit of MEL script that's batch processing a load of scene files and outputting data to text files. After around 180 files I get a warning saying "Internal file descriptor table is full", and the next time fopen is called after that warning it fails to open the file for writing.
I have tried to reproduce this by simply making a loop that fopens, fprints and fcloses a file (see the sketch below), but it didn't happen in that case even after 2000 iterations. I think it must be related to the amount of data written, but I've been unable to narrow it down more than that.
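For reference, the repro attempt was essentially just this (the path is made up for the post):

// Minimal repro attempt: fopen, fprint, fclose in a tight loop.
// (The path is just a placeholder for this example.)
int $i;
for ($i = 0; $i < 2000; $i++)
{
    string $path = ("C:/temp/fopenTest_" + $i + ".txt");
    int $fileId = `fopen $path "w"`;
    fprint $fileId ("iteration " + $i + "\n");
    fclose $fileId;
}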
Does anyone know what exactly this "internal file descriptor table" is? I'm guessing it's either the software buffer that gets written to before fprint writes to disk (which would make more sense if it's related to the volume of data written), or simply a table of open file IDs that's hitting its limit.
Really I'm just interested to know if anyone has a way to avoid this, so any suggestions as to how I might avoid filling up this table, or a way to flush it, would be much appreciated.
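For example, in case it's leaked file IDs rather than a hard Maya limit, would guarding every fopen along these lines make any difference? (This is just the pattern, not my actual script; writeReport, $outPath and $data are made-up names, and I'm assuming fopen returns 0 on failure, which is worth double-checking.)

// Hypothetical sketch: check fopen's return and keep fclose
// unconditionally paired with every successful fopen so no IDs leak.
// (Assumes fopen returns 0 when the file can't be opened.)
global proc writeReport(string $outPath, string $data)
{
    int $fileId = `fopen $outPath "w"`;
    if ($fileId == 0)
    {
        warning ("fopen failed for: " + $outPath);
        return;
    }
    fprint $fileId $data;
    fclose $fileId;  // nothing between fopen and fclose can exit early
}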
Thanks.
