I have run some simulations on very basic models and I'm getting some odd results. If I take an open space, say 40 m × 60 m with no buildings, and run an external daylight analysis, basically to get the illuminance data file (.ill) for the year, the results say the daylight levels are higher in September than in December, for example, which is obviously not right. So I'm wondering where the program obtains its daylight data. Is it (a) actual measured illuminance data (which would therefore vary with the sky conditions on the day of collection), (b) illuminance calculated from actual measured irradiance data, or (c) something else?
If you can help identify how the data is obtained, that would be great.
This article may help answer your question:
I read that article; however, it doesn't really say where the values come from. This is what it says:
"If you do not specify the total horizontal illuminance of the sky, RADIANCE will automatically calculate it based on the current date, time and model latitude. "
What is this calculation based on? Is it based on solar radiation/irradiance data with a luminous efficacy factor applied, as in this comment:
"As illuminance levels are given in Lux, you need to convert them to an equivalent radiant energy value in W/m2. As 1 Lux = 1 Lumen/m2, and RADIANCE uses a default luminous efficacy for daylight of 179 Lumens/Watt, this is done by dividing the total horizontal illuminance value required by 179 to give W/m2."
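The conversion described in that quote is a simple scalar division. Here is a minimal sketch of it in Python, assuming the 179 lm/W figure quoted above (RADIANCE's default luminous efficacy for its combined RGB channels):

```python
# RADIANCE's default luminous efficacy for daylight, in lumens per watt,
# as quoted in the comment above.
RADIANCE_LUMINOUS_EFFICACY = 179.0  # lm/W

def lux_to_wm2(illuminance_lux: float) -> float:
    """Convert a horizontal illuminance (lux) to the equivalent
    radiant energy value (W/m2), since 1 lux = 1 lumen/m2."""
    return illuminance_lux / RADIANCE_LUMINOUS_EFFICACY

# e.g. a 10,000 lux sky corresponds to roughly 55.9 W/m2
print(round(lux_to_wm2(10_000), 1))
```

So if you specify a target illuminance, dividing by 179 gives the irradiance value the simulation actually works with internally.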
Furthermore, I thought the .ill file provided the illuminance levels at each hour (or was it half-hour) throughout the year. Is that correct? Can you offer any suggestions as to why the results in these .ill files would not be what I expect? (The results showed higher light levels in spring than in summer.) As mentioned, I was simulating an open space with no buildings or obstructions, such as an open field.
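One way to sanity-check the seasonal pattern is to average the .ill values per month. The sketch below assumes a DAYSIM-style layout where each row starts with month, day and hour followed by one illuminance value per sensor point; the exact column layout depends on the tool that wrote the file, so check your own file before relying on this:

```python
from collections import defaultdict

def monthly_averages(ill_lines):
    """Mean illuminance per month from DAYSIM-style .ill rows.

    Assumed row layout (an assumption, not a guarantee):
        month day hour sensor1 sensor2 ...
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for line in ill_lines:
        parts = line.split()
        if len(parts) < 4:
            continue  # skip blank or malformed lines
        month = int(float(parts[0]))
        values = [float(v) for v in parts[3:]]
        sums[month] += sum(values) / len(values)  # spatial mean over sensors
        counts[month] += 1
    return {m: sums[m] / counts[m] for m in sums}

# Tiny synthetic example: two timesteps in June, one in December
rows = [
    "6 21 12.5 40000 42000",
    "6 21 13.5 38000 39000",
    "12 21 12.5 8000 9000",
]
print(monthly_averages(rows))
```

If a plot of these monthly means shows spring above summer for an unobstructed field, that points at either the weather file driving the simulation or the sky model settings rather than the geometry.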