Create SMPTE 75% Bars with a softness of 2 and a bit depth of 12-bit Unpacked. Resolution doesn't matter here.
If created on Linux, the result in the Flame UI is fine, but the data Wiretap is returning has artifacting (see attachment).
Notice the artifacting at the color seams.
If I create this same file on SGI Flame 9.5.13 and access it via the same tools, the result is fine. If I then do a file archive of that clip, load it onto a Linux machine, and access it with the same tools, the result is again fine. We have tested this not only with our QuickTime technology, "Tether", but also with raw data dumps without any translation, and experienced the same results. I have tested this against multiple Linux machines, IBM and HP, Flint & Flame, across several different locations. I have yet to test this with SGI 2007.
On SGI Flame 9.5.13 I created two SMPTE 75% clips, one with a Softness of 2, the other 4.
I exported each frame as a 16-bit SGI image.
I then used my tool, Tether, to link to the stonefs clip on the Mac. I brought the corresponding SGI image and Tether clip into Combustion and did a difference matte between them with a 0 Tolerance setting. There was not even a 1-point difference in value on a single pixel -- absolutely data-identical. I then archived the two SMPTE clips to file and imported them into Linux 2007.SP1. I ran the same test; again, not a single pixel of difference.
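The zero-tolerance difference matte described above can be sketched outside of Combustion as a per-sample comparison of the raw frame buffers. This is only an illustration, assuming both buffers hold unpacked 16-bit samples in the same byte order; the function name is hypothetical.

```python
import array

def max_pixel_difference(frame_a: bytes, frame_b: bytes) -> int:
    """Zero-tolerance check: interpret both buffers as unpacked 16-bit
    samples and return the largest absolute per-sample difference.
    A result of 0 means the frames are data-identical."""
    a = array.array("H", frame_a)
    b = array.array("H", frame_b)
    if len(a) != len(b):
        raise ValueError("frames differ in size")
    return max((abs(x - y) for x, y in zip(a, b)), default=0)
```

A return value of 0 corresponds to "not even a 1-point difference on a single pixel" in the test above.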
I then ran the same tests on SMPTE 75% clips generated from within Linux; these were also exported as 16-bit SGI images for use in the difference. Here is where all the problems arise: massive artifacting at the color seams, plus an overall luminance shift. So not only are the Linux clips coming out of Wiretap with artifacts, they don't even match the SGI image export of themselves.
I have attached a zip that includes the four SGI images (two FlameSGI SMPTE and two FlameLINUX SMPTE) and a Flame File Archive of the corresponding stonefs clips.
It turns out this is an endianness issue. Linux data byte order is little-endian and SGI data byte order is big-endian. Your image conversion process is assuming big-endian all the time -- which is why data from Linux is being misinterpreted.
To be clear: the data issued via Wiretap (SGI and Linux alike) is raw data passed through without *any* changes. You'll just have to ensure that the image conversion process you're feeding it into detects the endianness of the incoming data and reacts accordingly.
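The misinterpretation described above can be sketched as a byte-swap decision on the consumer side. This is a minimal illustration, not Wiretap code; the function name and parameter are assumptions.

```python
import sys
import array

def frame_to_native16(raw: bytes, source_is_little_endian: bool) -> array.array:
    """Interpret a raw buffer of unpacked 16-bit samples, swapping bytes
    only when the source endianness differs from this machine's."""
    samples = array.array("H", raw)          # native-endian view of the buffer
    native_is_little = sys.byteorder == "little"
    if source_is_little_endian != native_is_little:
        samples.byteswap()                   # undo the misinterpretation
    return samples
```

For example, a 12-bit sample of 0x0BFF stored unpacked in 16 bits and read with the wrong endianness becomes 0xFF0B -- a large error, which is exactly the kind of corruption that shows up as artifacting at color seams and a luminance shift.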
I must admit I am explaining things I don't fully understand, but your developers should be able to make sense of this.
But what is confusing me is that clips created on SGI but transferred to Linux are fine. If it is an endian issue, why wouldn't any clip that lives on a Linux stonefs have the problem too? There is no way to tell whether a clip on a Linux stonefs was created on SGI, or vice versa.
The endianness is visible in the format tag of the WireTapClipFormat:
"rgb" and "rgb_le" denote big and little-endian respectively. If different data comes out of different servers with the SAME format tag, then we have a bug. A simple binary diff will do the trick to verify.
As an aside, image export is not really a valid test/comparison, since most image formats impose an endianness regardless of platform.
"If you loaded the transferred clip to the desktop on Linux and processed a zero-value CC, *that* result would be screwy."
I loaded a Linux clip into SGI Flame 2007 and ran it through CC with no values; the resulting clip is still reported as rgb_le by your tools and exhibits the same issues as at the start of this thread.