Get access to the OptQuest algorithm

maryamh11
Collaborator
391 Views
10 Replies
Message 1 of 11

[ FlexSim 24.0.0 ]

How can I access and modify the optimizer's algorithm to test different results or implement alternative algorithms?

I found these two posts, but they are from 2017 and didn't answer my question.

https://answers.flexsim.com/questions/44888/is-there-a-way-to-access-andor-change-the-algorith.html

https://answers.flexsim.com/questions/34782/optquests-inner-workings.html

0 Likes
Accepted solutions (1)
Replies (10)
Message 2 of 11

joerg_vogel_HsH
Mentor
@Maryam H2, what do you expect to be able to modify, and which algorithm do you want to use?
0 Likes
Message 3 of 11

jason_lightfoot_adsk
Autodesk
Accepted solution

The answer about OptQuest is the same as Jordan posted in 2017 - nothing has changed since then in that regard.

You may additionally want to explore reinforcement/machine learning, for which FlexSim has some support in recent versions. (I've yet to see a convincing example of this being worth the effort over implementing a tailored/custom heuristic.)

Message 4 of 11

maryamh11
Collaborator

@Joerg Vogel The first step for me is to understand how the optimizer's current algorithm works. Then I want to test some changes; for example, if it uses a GA to find the optimal solution, I'd like to modify the fitness function, or swap in a completely different algorithm such as Tabu search or PSO, and compare the results to see how the choice of algorithm and parameters changes the outcome.

0 Likes
Message 5 of 11

maryamh11
Collaborator

@Jason Lightfoot Good idea! I'm also willing to explore RL/ML applications in my model. What is the support in recent versions you are referring to?


0 Likes
Message 6 of 11

jason_lightfoot_adsk
Autodesk

The link I gave is for 22.2 onwards - much later than the 2017 links you included in the question.

0 Likes
Message 7 of 11

maryamh11
Collaborator

@Jason Lightfoot I see that on the RL training page the training algorithm is PPO (proximal policy optimization), which is an RL algorithm. However, heuristic algorithms such as GA and Tabu search are not well aligned with this type of RL algorithm, since heuristics are rule-based methods that replace the exhaustive search for an optimal solution, while algorithms like PPO optimize the policy by performing gradient ascent on expected returns. Did you mean improving the current PPO, changing PPO to something else, or is there really a way to implement heuristic algorithms and compare the outcomes?


0 Likes
Message 8 of 11

jason_lightfoot_adsk
Autodesk

I meant vs. writing your own algorithm/rules in FlexScript - not using RL.

0 Likes
Message 9 of 11

maryamh11
Collaborator

@Jason Lightfoot Do you mean not using the RL training module at all, or only part of it? Can you give an example of how that's possible?


0 Likes
Message 10 of 11

Ram7
Not applicable

How can I contact you, @Jason Lightfoot?

0 Likes
Message 11 of 11

jason_lightfoot_adsk
Autodesk

In the past I've used a simple heuristic to decide how to distribute material across reels to satisfy rules about what can be supplied to a customer while minimising waste.

Another example is a sequencing problem to minimize color-change setup times (attached). You'll often find suggestions to use genetic algorithms or reinforcement learning to find optimal solutions for these types of problems, but in some cases a fast and powerful heuristic may be better.
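To give an idea of what such a heuristic can look like, here is a minimal greedy nearest-neighbor sketch in Python (not the FlexScript from the attached model - the function name, job names, and setup-time values are all made up for illustration). At each step it picks the remaining job with the cheapest changeover from the current one:

```python
# Greedy nearest-neighbor heuristic for sequencing jobs to reduce
# changeover (setup) time. Fast, but not guaranteed optimal.
def greedy_sequence(jobs, setup_time, start):
    """Order jobs so each next job has the cheapest changeover
    from the current one. setup_time maps (from, to) -> cost."""
    remaining = set(jobs)
    current = start
    order, total = [], 0
    while remaining:
        # Pick the cheapest changeover from the current job.
        nxt = min(remaining, key=lambda j: setup_time[(current, j)])
        total += setup_time[(current, nxt)]
        order.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return order, total

# Illustrative changeover costs between colors (values are invented).
colors = ["red", "blue", "green"]
setup = {(a, b): (0 if a == b else 5)
         for a in ["white"] + colors for b in colors}
setup[("white", "red")] = 1   # some transitions are much cheaper
setup[("red", "blue")] = 2
setup[("blue", "green")] = 2

order, total = greedy_sequence(colors, setup, start="white")
print(order, total)  # ['red', 'blue', 'green'] 5
```

A GA or RL agent could search the same permutation space, but for small to medium job lists a rule like this runs in milliseconds and is easy to explain and maintain - which is the trade-off being argued above.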

minSetupSequence.zip

0 Likes