If so, could you explain it a little more? Also, if these are really conveyors, then why not model them as such?
Let's re-write this - but first tell us exactly what needs to happen.
Does each type get sent to one lane only at a time?
Once a lane is allocated a type then it receives only that type until it has 6 items?
Once you've sent 6 items you can then start to send a second different type to that lane even though it still contains the previous type?
Lanes don't need to flush between types?
Anything else?
If there are 10 different types of items flowing, each type should flow into its own lane, so 10 different lanes. If the 11th item is of type 1, it should take lane 1, or whichever lane that type already occupies, until that lane reaches a content of 6. After it reaches the value of 6 (the max content), the type could take another lane.
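To make sure I read that right, here is a rough sketch of that rule in plain Python (the lane structure, the max of 6, and the function names are just placeholders, not model code):

```python
# Rough sketch of the lane rule described above (illustrative Python, not FlexSim).
MAX_PER_LANE = 6

def choose_lane(item_type, lanes):
    """lanes: list of dicts like {"type": None, "count": 0}."""
    # Prefer a lane that already holds this type and is not yet full.
    for lane in lanes:
        if lane["type"] == item_type and lane["count"] < MAX_PER_LANE:
            return lane
    # Otherwise claim any still-unassigned lane.
    for lane in lanes:
        if lane["type"] is None:
            lane["type"] = item_type
            return lane
    return None  # no lane available, the item has to wait

def send_item(item_type, lanes):
    lane = choose_lane(item_type, lanes)
    if lane is not None:
        lane["count"] += 1
    return lane
```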
The reason I ask is I'm wondering about the case when you have 2 of Type1 and 4 of Type2 in a lane. Since it has 6 in total, can it then take Type3 or does it need a minimum of 5 of each type as a 'slug'?
Luckily I was able to solve the issue, but of course you could have a look and give suggestions as well in case I made any mistakes.
Here's a model that uses a Decision Point at the end of the conveyor to assign a queue to the arriving items. It uses maps and an array (as labels on the DP) to keep track of which type is currently assigned to which queue/port, how many items were already sent there, and which queues are still available. If no queue can be assigned, the item is stopped.
When the last item exits a queue, it resets the respective map entries on the Decision Point and sends a message to it. This then lets the point reevaluate where to send the currently waiting item, if there is one.
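As a rough outline of the bookkeeping on the Decision Point, in plain Python rather than FlexScript (the map names mirror the description above; the port numbers and helper names are assumptions):

```python
# Illustrative sketch of the Decision Point bookkeeping (Python, not FlexScript).
type_to_port = {}                   # which type is currently assigned to which queue/port
sent_count = {}                     # how many items were already sent to each port
available_ports = [1, 2, 3, 4, 5]   # queues without an assignment

def on_arrival(item_type):
    """Return the port to send the item to, or None to stop it."""
    port = type_to_port.get(item_type)
    if port is None and available_ports:
        port = available_ports.pop(0)       # assign a free queue to this type
        type_to_port[item_type] = port
        sent_count[port] = 0
    if port is None:
        return None                         # no queue can be assigned -> stop the item
    sent_count[port] += 1
    return port

def on_last_item_exit(item_type, port):
    """Last item left the queue: reset the map entries and reevaluate."""
    type_to_port.pop(item_type, None)
    sent_count.pop(port, None)
    available_ports.append(port)
    # In the model this corresponds to the message that makes the Decision Point
    # reevaluate where to send the currently waiting item, if there is one.
```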
model-built-mockup-testing-fm.fsm
If you also want the processor to first empty a queue before moving on to the next, I would suggest having all queues push their items to a list from which the processor pulls. (Meaning the entire batch will be pushed to the list immediately, preventing other items from getting mixed in.)
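Conceptually the list idea boils down to something like this (a Python stand-in for the FlexSim List; the names are assumptions):

```python
# Conceptual sketch (Python): release a complete batch to a shared list in one
# step, so the processor consumes batches back to back and nothing gets interleaved.
shared_list = []   # stands in for the FlexSim List the processor pulls from

def release_batch(queue_items):
    shared_list.extend(queue_items)   # the entire batch is pushed at once
    queue_items.clear()

def processor_pull():
    return shared_list.pop(0) if shared_list else None   # oldest released item first
```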
If they are not, you will always see cases of "duplication". Take the following example: a queue's maximum content is reached, but because the processor is busy it cannot yet release any items. If another item of the same type arrives, it will be assigned to an empty queue if there is one. Some time later, items will start to leave the first queue, leaving you with two partial queues holding the same type.
Whichever queue you choose to route items to now, the other one will probably stay blocked by the partial batch until the max wait time elapses, since it is not guaranteed that the first queue will completely empty out before that time. What should the logic do at this point in your opinion?
same type at same time *
I have done similar logic but used it in a pull setup. When I compare it with your logic, the results are very similar but not the same, and I don't know why; it's kind of confusing. model_built_mockup_testing_final_macro_5.fsm
The pull requirement of the queues is evaluated in the order of their port rankings. So checking queues with a lower port number is actually not needed: if there were a queue with a lower port that could receive the item, it would already have taken it, and the pull requirement of the current queue would never be evaluated in the first place.
You should do the check the other way around. Start at the highest port rank and work your way down, to check whether the queue has to leave the item for another queue whose pull requirement will be evaluated later.
Also, you are trying to read the "Type" label on the queues without checking whether it exists, which leads to the error on Queue68 for every arriving item.
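To illustrate both points in plain Python (the queue and label structures are stand-ins for the FlexSim objects, not actual FlexScript):

```python
# Sketch of the corrected pull check (illustrative Python, not FlexScript).
# `queues` is assumed to be ordered by port rank; each queue is a dict with a
# "labels" dict, so a missing "Type" label can be handled safely.

def get_label(queue, name, default=None):
    return queue["labels"].get(name, default)   # guard against a missing label

def should_take(item_type, my_rank, queues):
    """Pull requirement of the queue at index `my_rank` (lower index = lower port rank)."""
    my_type = get_label(queues[my_rank], "Type")
    if my_type is not None and my_type != item_type:
        return False                            # this queue already holds another type
    # Only the HIGHER port ranks need checking: lower-ranked queues were already
    # evaluated and would have taken the item if they could.
    for queue in reversed(queues[my_rank + 1:]):
        if get_label(queue, "Type") == item_type:
            return False                        # leave the item for that queue
    return True
```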
Attached is an improved version of the logic I built. It now keeps track of all queues that might be assigned to a type, not only the most recent one. This enables it to choose between multiple possible destinations. The decision is based on how many items each queue still needs to reach the batch size; the queue that needs fewer items gets priority. The number of items still pending towards the next batch is tracked together with the current content in the "QtyMap".
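Roughly, the destination choice could be pictured like this (a Python sketch; the "QtyMap" layout is inferred from the description above and the batch size of 6 is assumed):

```python
# Sketch of the destination choice (illustrative Python, not the model's code).
BATCH_SIZE = 6

def pick_queue(candidates, qty_map):
    """candidates: names of the queues currently assigned to the item's type.
    qty_map: queue name -> (current content, items already pending for it).
    Prefer the queue that needs the fewest items to complete its batch."""
    best, best_needed = None, None
    for name in candidates:
        current, pending = qty_map[name]
        needed = BATCH_SIZE - (current + pending)
        if needed <= 0:
            continue                      # this queue's batch is already covered
        if best is None or needed < best_needed:
            best, best_needed = name, needed
    return best                           # None means no candidate can take more items
```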
I also changed the transport logic. The items are now pushed to a list and the processor pulls them from there. This has two effects: the processor will work on the items in the order in which the batches were released, rather than pulling the item from the lowest available port rank, and it allows me to reset the batch counter in the "Send to Port" code in case a partial batch is released.
The processor could then again use a port connection, since the items will enter the output queue in FIFO order anyway.
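A minimal sketch of the changed transport step, assuming a plain Python list as the stand-in for the FlexSim List and a simple per-queue batch counter (the names are placeholders, not the model's actual labels):

```python
# Sketch of the revised "Send to Port" / pull logic (illustrative Python only).
BATCH_SIZE = 6

def send_to_port(queue_state, release_list):
    """Push the queue's released content onto the list the processor pulls from."""
    batch = queue_state["items"]
    release_list.extend(batch)              # whole (possibly partial) batch, in release order
    if len(batch) < BATCH_SIZE:
        queue_state["batch_count"] = 0      # reset the counter when a partial batch is released
    queue_state["items"] = []

def processor_pull(release_list):
    # Batches are worked on in the order they were released, not by port rank.
    return release_list.pop(0) if release_list else None
```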