Reinforcement Learning to determine the optimal number of queues

Elax
Not applicable
52 Views
6 Replies
Message 1 of 7


[ FlexSim 23.2.0 ]

Hello,

I am trying to build a reinforcement learning process in FlexSim. In our system, processors 1 and 2 are a lot faster than processor 3. When starting processor 1 or 2, the system ensures that at least one queue is empty; if all of the queues are occupied, the item is held in front of processor 1/2. I want to find out what the required number of queues is. This is a simplification of the process that I am trying to optimize. Preferably, I want to use reinforcement learning.
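
Roughly, the gating check I have in mind looks like the following FlexScript (a sketch only; the group name "QueueGroup" is a placeholder, not an object in my model):

// Before processor 1/2 accepts the next item, look for at least one empty queue.
int emptyQueueRank = 0;                  // 0 = no empty queue found
Group queues = Group("QueueGroup");      // all candidate queues collected in a group
for (int i = 1; i <= queues.length; i++) {
    treenode q = queues[i];
    if (q.subnodes.length == 0) {        // this queue currently holds no items
        emptyQueueRank = i;
        break;
    }
}
// If emptyQueueRank is still 0, every queue is occupied and the item waits upstream.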


In addition, since each queue has specific logic (allowing only certain kinds of items to enter, with a certain "weight limit"), is there a way to duplicate the queues without copying the FlexScript over and over again? Thanks!

1741887559827.png

Accepted solutions (1)
Replies (6)
Message 2 of 7

joerg_vogel_HsH
Mentor

@Elax, is there a way? Add more 3D processors and check how their utilization changes in relation.

A typical concept of object-oriented programming is inheritance.

https://docs.flexsim.com/en/25.1/Using3DObjects/WorkingWith3DObjects/UsingTemplates/UsingTemplates.h...

You can search the online documentation by keyword. Sometimes you get answers that way; in this case it would be successful.


Message 3 of 7

moehlmann_fe
Observer
Accepted solution

You can add (equal) queues to a group and link that group to a model parameter. The option "Delete and Copy Group Members" will create as many copies of the first object in the group as the parameter states.

You can then run an experiment to see how many queues you need (so that the upstream processors are never blocked, I assume).

Here's a basic example where the number of processors needed to avoid any backlog in the queue is determined that way.

experiment-example.fsm

Another approach would be to not change the number of queues and instead assume an unlimited capacity. By measuring the maximum content you can then calculate how many queues would have been needed to fit all material.
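
As a rough sketch of that second approach (the label, object name, and capacity value below are assumptions, not taken from the attached model): track the peak content of the "unlimited" queue with a label, then divide by the capacity of one real queue.

// OnEntry trigger of the unlimited-capacity queue
// (assumes a number label "peakContent" on the queue, initialised to 0):
Object current = ownerobject(c);
double content = current.subnodes.length;        // items currently in the queue
if (content > getlabel(current, "peakContent"))
    setlabel(current, "peakContent", content);

// At the end of the run (e.g. in a performance measure):
Object q = Model.find("UnlimitedQueue");
double perQueueCapacity = 10;                    // capacity of one real queue
int queuesNeeded = Math.ceil(getlabel(q, "peakContent") / perQueueCapacity);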

While you say that this is a simplification and I am by no means an expert when it comes to Reinforcement Learning, I don't see how it could be applied here. An RL agent is meant to make decisions during the model run based on the state of the model. Varying the starting conditions is what the experimenter/optimizer is for.

0 Likes
Message 4 of 7

Elax
Not applicable
Thanks! I will look into using inheritance with our code.
0 Likes
Message 5 of 7

Elax
Not applicable
Thanks! Using the experimenter with a range is indeed quite useful and would probably work. An additional constraint I have is accommodating the custom code I have added in the individual queues. Each duplicate of the queue needs slight modifications to the tagging it uses when interacting with a global table that tracks the capacity of the queues.


1. Items that enter a queue trigger an update to a global table.

2. Items that leave a queue also trigger an update.

This is doable when I manually copy and edit the custom code in each queue. However, I have no idea how to do it when the copies are created through the experimenter. Is there a way for me to modify the custom code depending on the queue index?

This would be something that I'm looking to solve as well.


I agree with your point about not using RL in this use case. It does sound like an overcomplication compared to just using the experimenter. Many thanks again for your response.

0 Likes
Message 6 of 7

moehlmann_fe
Observer
Observer

If possible, I would make the differences in the code depend on labels on the object. Those can be set in the OnSet code of the parameter.

capture1.png

experiment-example_1.fsm
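
To illustrate the idea in code (the group "QueueGroup", the table "QueueCapacity", and the label names below are assumptions, not taken from the attached model):

// OnSet code of the queue-count parameter: after the copies are created,
// stamp a row index label on every member of the group.
Group queues = Group("QueueGroup");
for (int i = 1; i <= queues.length; i++) {
    setlabel(queues[i], "QueueIndex", i);
}

// Shared OnEntry trigger of every queue copy: the label replaces the
// hand-edited, per-queue values when updating the global table.
Object current = ownerobject(c);
treenode item = param(1);
int row = getlabel(current, "QueueIndex");
Table("QueueCapacity")[row][1] += 1;    // or += getlabel(item, "weight") for a weight total
// The matching OnExit trigger would subtract the same amount.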

Message 7 of 7

Elax
Not applicable
That'll work wonderfully. Thank you so much!
0 Likes