FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm

The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2 and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories, namely initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based and history-based order generation.

Initial Inventory
The model includes a Global Table of initial inventory. The process flow's initial inventory section reads this table, then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot().

Inbound
I use the process flow to assign a slot to each incoming item, using an Assign Labels activity called Find Slot. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity properly finds a slot to store the item, then I assign the item to that slot and have an operator place it in the rack.

Outbound
I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method. In the random method, orders are generated randomly every ~30 seconds. Each order includes a number of SKU line items (again, random), and each line item includes a quantity of that SKU (again, random). Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, it reserves the item as "outbound" by assigning the Storage.Item.assignedSlot property to null (Set to Outbound activity). This ensures that no other process will find that same item for picking.

The history-based order generation process uses much of the same functionality as the random-based one, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format for what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order and then spawning that order.

Custom Rack Visualization
I have also customized the visualization of the racks. I have added text to the front (and bottom, on the floor) of each rack slot that shows the address of that slot. Further, I've given the text a background that is color-coded to the SKU that the slot is designated to store. This was all done through the Storage System's Visualizations tab, where I customized the Rack visualization.
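For reference, a Find Slot query along the lines described above might look something like this in FlexScript. This is a minimal sketch; the exact query text and arguments are assumptions, not copies from the model:

    // find a random slot whose Type matches the item's Type and that can hold it
    Storage.Slot slot = Storage.system.findSlot(
        "WHERE slot.Type = $1.Type AND slot.fitsItem($1) ORDER BY RAND()", 0, item);
    if (slot) {
        // then set the item's Storage.Item.assignedSlot property to this slot,
        // as described above, and dispatch an operator to place it in the rack
    }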
View full article
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy-to-set-up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm

All necessary setup is done through labels of the collector. The first three are identical to labels found on the default Statistics Collector behind a state bar chart.
- Objects should point at a group that contains all objects the collector should track the utilization for.
- StateTable is a reference to the state table that will be used to determine which states count as 'utilized'.
- StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile).
- MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization.
- NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600s, taking 12 measurements within that interval, i.e. one every 300s. Each measurement still denotes the utilization over the complete MeasureInterval. The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the previous hour from that point in time.
- StoredTimeMap is used to allow the collector to function correctly past a warmup time, by storing the total utilized value of each object up to that point. This should not be changed manually. Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply".

The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The NumSubIntervals label essentially just controls how many entries are kept in that array.

To copy the collector into another model, create a fresh collector in the target model. Then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector. I hope this can help to speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it).

Best regards
Felix

Edit: Added a user library with the collector as a draggable icon to the attached files.
Edit 2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this.
Edit 3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
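As a rough illustration of that rolling-window idea in FlexScript (illustrative names, not the collector's actual code):

    Array samples = row.Samples;          // row label holding recent 'total utilized time' samples
    samples.push(currentUtilizedTime);    // append the newest sample
    if (samples.length > numSubIntervals + 1)
        samples.shift();                  // discard the oldest sample
    // average utilization over the window is the change across the stored samples
    double utilization = (samples[samples.length] - samples[1]) / measureInterval;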
View full article
This model and library will allow you to produce a heat map of anything moving in the model - including AGVs and flowitems. To add this to a model is simply a matter of: 1) Load the attached user library 2) Add objects to the group HeatMapMembers 3) Drop a heat map object (cylinder) into the model - reset and run. With this updated version you can now have multiple mapper objects in the same model showing different groups - made easier by the addition of a 'groupName' label on the mapper. You can easily change the height at which the map is drawn using the 'zdraw' label, and alter the sampling interval and grid size using the 'heatInterval' and 'resolution' labels. The resolution is the number of divisions per model length unit. In the example model, set to metres, a value of 2 gives 4 divisions per square metre. Currently non-flowitems are set to ignore time when the object is in an idle state. HeatMapAnything.fsl HeatMapAnything.fsm
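To spell out the resolution arithmetic: a resolution of r divides each model length unit into r cells per axis, so each square length unit contains r x r cells.

    cells per square length unit = resolution^2
    e.g. resolution 2 gives 2^2 = 4 cells per square metre, each 0.5 m x 0.5 m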
View full article
Attached is an example model and user library comprising commands to return an array of objects whose bounding boxes intersect, and a Collision Detection object to drop into your model. The Collision Detection object has a ticker interval label to adjust the frequency of checks and will switch the colliding objects to selected. It looks for two groups: "Obstacles", containing static objects in the scene (which may be overlapping and not recorded as collisions), and "Colliders", the objects navigating the scene whose bounding boxes should be checked for intersection. In the example model I'm adding the flowitem when it is created using Group("Colliders").addMember(item). The detector code is on its FlexScript label, 'analyseScene', which is first scheduled to run by the object's reset trigger. collisionDetection3.fsm BBCollisionDetection2.fsl
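For intuition, the core of a bounding-box intersection test is a per-axis interval overlap check along these lines. This is a generic sketch (the min/max corner vectors are assumed to be computed elsewhere from each object's bounds), not the library's actual code:

    // aMin/aMax and bMin/bMax are the model-space min/max corners of two boxes;
    // the boxes intersect only if their intervals overlap on all three axes
    int overlap =
        aMin.x <= bMax.x && bMin.x <= aMax.x &&
        aMin.y <= bMax.y && bMin.y <= aMax.y &&
        aMin.z <= bMax.z && bMin.z <= aMax.z;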
View full article
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that serves FlexSim to a browser, you may want to proxy to this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Apache web server.

Install the FlexSim Web Server Program
Download and install the FlexSim Web Server from https://account.flexsim.com
Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
Change the port from 80 to 8080
Start the FlexSim Web Server by double clicking flexsimserver.bat
Test the server by going to http://127.0.0.1:8080 It should look like this:

Install Apache Web Server for Windows
Download Apache x64 from https://www.apachehaus.com/cgi-bin/download.plx
Extract the httpd-<version>.zip file you have downloaded
Go into the httpd-<version> folder you have extracted and copy the Apache24 (or Apache25) folder to C:\

Install Apache Dependencies
Make sure you have the Microsoft Visual C++ 2008 SP1 package installed. You can get it here: https://www.microsoft.com/en-US/download/details.aspx?id=26368
Download the vcredist_x64.exe package and run the installation

Configure Apache
Open C:\Apache24\conf\httpd.conf in a text editor
Look for the following lines in this configuration file and remove the # character:

    #LoadModule proxy_module modules/mod_proxy.so
    #LoadModule proxy_http_module modules/mod_proxy_http.so
    #LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

Those modules should now look like this:

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

At the bottom of the httpd.conf file, add these 3 lines and save the file:

    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
    AllowEncodedSlashes On

Run Apache Web Server
In a file explorer or CMD prompt, browse to the C:\Apache24\bin folder and run httpd.exe

Now that you have the FlexSim Web Server proxied through Apache, you may decide you want to configure Apache to handle security, authentication, and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting these customizations up. A few resources you may consider:
https://community.apachefriends.org/f/
https://stackoverflow.com
View full article
This article reviews one method for making a state Gantt chart for the default and alternate state profiles.

Example Model
You can download the model for this walkthrough (stateganttdemo.fsm). The model has two multiprocessors, in a Group called Multiprocessors. Each multiprocessor has two processes: Process1 and Process2. To make the chart, we will first make a Statistics Collector, and then a Calculated Table.

Making the Statistics Collector
Make a new Statistics Collector. On the Event Listening tab, use the Sampler to listen to On State Change of the group of multiprocessors. You can leave the parameter names alone. However, we need to add a label, so we can record the profile number. Select the new event, and then use the green plus button in the Event Labels area to add a label for this event. Set its name to ProfileNum, and its value to the following code:

    data.StateProfileNode?.rank

The event settings should look something like the following: Next we need to set the row mode. Make sure it's set to Add Per Event, with no row value. As the final configuration step for the statistics collector, we need to set up the columns. There should be four columns in this collector:
- Time - in the pick options, select Time, then Model Date/Time
- Object - in the pick options, select IDs, then ID of Event Object
- Profile - type data.ProfileNum for the value. The default storage and display format are fine.
- State - set the Storage Type to String, and type the following code:

    data.eventNode.as(Object).stats.state(data.ProfileNum).profile[data.ToState + 1][1]

The code is necessary because On State Change occurs before the state is set to the new state, so the code looks up the name of the future state in the profile table. When you reset and run this model, you will see a table like the following:

Making the Calculated Table
Make a new Calculated Table, and give it the following query:

SELECT Object, Time as StartTime, LEAD(Time) OVER (PARTITION BY Object) AS EndTime, State FROM StatisticsCollector1 WHERE Profile = 1

This query creates an Object column as well as a Time column. To get the time that the current state ends, we look to when the next state begins. The LEAD() function looks ahead in the table, and the OVER (PARTITION BY Object) clause makes sure that LEAD() looks at the next row with the same Object. We also record the State column, and filter out the standard state profile, keeping the special multiprocessor state profile. Once you get this query to work, change the Update Mode to By Interval, and set the interval to 20 or 30. Since the Statistics Collector table will get longer and longer, the query will become more and more expensive as the model runs. To control how much time is spent running the query, we use an interval. The final configuration of the Calculated Table should look like this: You will need to set the Display Format of each column on the Display Format tab (Object, Date/Time, Date/Time, and Raw).

Making the Chart
Make a new dashboard, and create a Gantt chart. Point it at the Calculated Table. When you do that, the chart should fill in all the other columns correctly.

Charting Both State Profiles for Both Objects
In order to chart both profiles on the same chart, we first need to add a column to the Statistics Collector, and then update the query in the Calculated Table. The new column should be named ObjectAndProfile, with a Storage Type of String.
Use the following code for the value:

    data.eventNode.name + " - " + string.fromNum(data.ProfileNum.as(int))

Then change your query to the following:

SELECT ObjectAndProfile, Time as StartTime, LEAD(Time) OVER (PARTITION BY ObjectAndProfile) AS EndTime, State FROM StatisticsCollector1

With these changes, you should be able to view both profiles for both multiprocessors.
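As an aside, here is a toy illustration (invented values) of what LEAD(Time) OVER (PARTITION BY ...) produces in these queries:

    Object  Time  State        =>  Object  StartTime  EndTime  State
    MP1     0     Setup            MP1     0          10       Setup
    MP1     10    Processing       MP1     10         25       Processing
    MP1     25    Idle             MP1     25         NULL     Idle

Each row's EndTime is the next row's Time within the same partition; the last row has no successor, so LEAD() returns NULL.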
View full article
FlexSim 2022 introduced a Reinforcement Learning tool that enables you to configure your model to be used as an environment for reinforcement learning algorithms. That tool makes connecting to FlexSim from a reinforcement learning algorithm easier, but it is not absolutely necessary for this type of connectivity. The same socket communication protocols that the tool uses are available generally in FlexScript. Attached (ChangeoverTimesRL_V22.0.fsm) is the FlexSim 2022 model that you build as part of the Using Reinforcement Learning documentation, which walks you through building and preparing a FlexSim model for reinforcement learning, training an agent within that model environment, evaluating the performance of the trained reinforcement learning model, and using that trained model in a real production environment. Also attached (ChangeoverTimesRL_V6.0.fsm) is a model built with FlexSim 6.0.2 from 2012 that does the exact same thing, but with custom FlexScript user commands instead of the Reinforcement Learning tool. You can use this model with the example python scripts and FlexSim 6.0.2 in the same way that you can use the other model with those same scripts in FlexSim 2022. I'm providing this FlexSim 6 model as an example that demonstrates how you can communicate between FlexSim and other programs. The Reinforcement Learning tool certainly makes this type of communication easier and simpler, with a nice UI for specifying RL-specific parameters, but the fundamental principles of how this works have been available in FlexSim for many years using FlexScript. Hopefully this example can help teach and inspire those who wish to control or communicate with FlexSim from external sources for purposes other than just reinforcement learning. FlexSim is flexible, and the possibilities are endless.
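For a sense of what the FlexScript side of a raw socket connection can look like, here is a minimal sketch using FlexSim's client socket commands. The command names exist in FlexSim, but the exact argument lists and return values shown here are assumptions from memory; check the command reference before relying on them:

    socketinit();                           // initialize the socket subsystem
    int sock = clientcreate();              // create a client socket
    clientconnect(sock, "127.0.0.1", 5005); // connect to the external program
    clientsend(sock, "my observation");     // send data to the external script
    string reply = clientreceive(sock);     // wait for the reply
    clientclose(sock);                      // clean up when done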
View full article
Tokens and Flow Items can be very difficult to add to a chart, because they don't exist on Reset, making them difficult to select. This article shows how you can use a Process Flow to allow a Statistics Collector to record a token's changing label value, and also to chart that value over time: The model for this example is attached (graphlabeldata.fsm). It is a very simple model: The Scheduled Source creates three tokens, each of which creates a label called data. This label is created by choosing "Add Tracked Variable" for the value, which opens this dialog box: The reason we want the label to be a Tracked Variable is that Tracked Variables emit an OnChange event. We want to listen to that event. If you use the time interval collection method, discussed later in this article, you don't need to make the label a Tracked Variable. Each token then goes through a loop, where it waits, and then updates the value. This is meant to represent a much more complicated model, where the token travels through many activities, any of which could change its label value. For this example, the model randomly changes the value on the label. Now that we have a token and a label whose value is changing as the model runs, we can work on making a chart. We want to eventually make a Statistics Collector, but Statistics Collectors can only listen to events of objects that exist after the model has been reset. Tokens and flowitems (along with their labels) are destroyed on reset, so we can't listen directly to them. However, some Process Flow activities can listen to events on tokens and flowitems, and the Statistics Collector can listen to those events. For that reason, we make a second Process Flow: This flow has an Event-Triggered Source, which listens for tokens to leave the "Init Tracked Variable" activity in the first flow. When that happens, the source creates a token, and that token immediately gets a reference to the label node (note that this is different from the value of the label node). Next, the new token goes to a Start activity called "Log Change." This activity is just a placeholder. While you could technically live without it, it makes things a little clearer, as we will discuss later. Other than providing OnEntry and OnExit events, the Start activity has no internal logic whatsoever. After passing through the Log Change activity, the new token waits for the label value to change: In order to listen to this event, you can first sample a Tracked Variable in the Toolbox. This provides the OnChange event. Then you can update the Object field to the code shown above. Notice that every time this event happens, the token simply passes through the Log Change activity, and then resumes listening. When the original label value changes, it emits an OnChange event. When that event fires, the token listening to that event travels through the Log Change activity, which emits OnEntry and OnExit events. We can use these events in the Statistics Collector. The key to this technique is that we used Process Flow, which is good at listening to token and flowitem events, to generate activity events, which can be used in the Statistics Collector. In the attached model, the first Statistics Collector is configured like this: It simply listens to the On Entry of the Log Change activity.
The columns are defined as follows: The first two columns are simpler; the Time column uses the Model Date/Time option: The second column gets the ID of the token as an integer: The third column gets the current value of the Data label: Now that the Statistics Collector is set up, we can configure the chart to use this collector, and split by the Token ID. The process to record the label value every interval (rather than on every change) is very similar. The downside is that the data is less granular, but the upside is that a label doesn't have to be a Tracked Variable to be charted. The example model simply uses a Split activity to copy the data from the Event Triggered Source, and sends it to a similar listening loop: Instead of waiting for the value to change, the second token waits for a fixed time interval. A similar Statistics Collector will allow you to create the following chart: This approach works for every token created by the scheduled source. No matter how many tokens you create, each will show up on the chart:
View full article
The attached model contains functionality to depict the item flow as a 3D map, using a FlowMapper3D object (cylinder) and an associated Object Process Flow. Additionally, a 'kpi' label on the object gives an indication of layout performance, which you can link to and observe as you interact with or experiment on the layout. To set this up in your model you'll need to add a Group of objects whose entry events will be used by the mapper - calling that Group "FlowMapperObjects". Then you'll need to add a ColorPalette called "HeatPalette". Finally you'll want to copy the FlowMapper3D object and the FlowMapperProcess to your model. Note that there is a boolean label 'showPercents' on the FlowMapper3D object to tell it whether to show percentage text or the number of flowitems for each location pair. 3DFlowMapper.fsm
View full article
In version 2018 and on, you can make this chart by dragging the Throughput Per Hour by Type template from the dashboard library. If you install the template (available on the Advanced tab), you will see a Process Flow and a Statistics Collector appear in your toolbox. One of the most common questions from FlexSim users is as follows: How do I make a chart that shows the output every hour? You can make this chart in three steps.

Configure the Statistics Collector
First, you need a Statistics Collector. Make a new one in the toolbox (click the green plus button, select Statistics, and then select Statistics Collector). On the Event Listening tab, use the green plus button to add a timer event, and configure it as shown here: This timer event will fire every hour (every 3600 seconds) in the model. Notice the shared label, which stores all members of the Processors group as an array. We will use this label in the next step. Once you have configured the timer, you need to set the row mode for this collector. We want one row per processor, and we need to use the Processors label as the row value. Since the Processors label is an array, we will get three rows per timer event, each row corresponding to a processor. Finally, we can add the columns. The three columns are as follows:
- Time - use the pick list to select Model Date/Time from the Time menu
- Object - use the pick list to select ID of row value from the IDs menu
- Output - use the pick list to select Statistic by Object from the Object Statistics menu, and use data.rowValue as the object value in the popup
If you use the pick options to choose these options, the storage type and display format options should be set automatically. With these three columns in place, we can watch the table populate. Reset and run the model at high speed. Every model hour, you should see a new set of rows appear, one for each processor in the group. The table will look something like this:

Configure the Calculated Table
The Statistics Collector table from the previous steps is close to what we want, except that the output value always increases as the model runs. But what about the output for just a single hour? To get that value, we can use a Calculated Table. Make a new calculated table, and give it the following query (in the Query field):

SELECT Time, Object, ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), 0) AS OutputPerHour FROM StatisticsCollector1

This query uses SQL window functions. Basically, it says that each row's value should subtract the previous row's value for the same object. In addition, if that value is NULL (because it's the first row for that object), ISNULL() replaces it with 0. If you reset and run the model, so that the collector table has at least a few rows in it, click the Update button to run the query. Notice that the Time and Object columns show numbers. This is because the Calculated Table can't infer the formatting of the columns. To set the formatting, use the Display Format tab. You may also wish the table to update every hour, with the Statistics Collector.

Make the Chart
Now that our data is correct, we can make a chart. Make a new dashboard, and create a Time Plot chart. Point the chart to the calculated table. Let's use the Time column for the X values, and the OutputPerHour column for the Y values. In addition, make sure to split by the Object column. If the calculated table updates every hour, then running the model should create the chart shown at the beginning of this article.
Here is the model used to create this chart (should work in 2017 Update 2 Beta or later; beta must be built on or after August 21, 2017). outputperhourdemo.fsm
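For intuition about the LAG() subtraction, here is a toy example with invented numbers: if a processor's cumulative Output column reads 0, 12, 27, 45 at successive hours, the OutputPerHour column becomes 0, 12, 15, 18. The first row falls back to 0 via ISNULL(), and every later row is the difference from the row before it within the same Object partition.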
View full article
This article explores an example model. In this model, items on downstream lanes are able to reserve dogs so that items on upstream lanes cannot use them: reservedogdemo.fsm

About Dogs in FlexSim
FlexSim simulates dogs on a power-and-free system in an extremely abstract and minimal way. A dog isn't a persistent entity at all. Instead, FlexSim calculates where dogs would be, given the speed, and when they would interact with items. This has a huge performance benefit. But if your logic needs items to interact with specific dogs, this can pose a problem: how do you interact with such an abstract entity?

The Catch Condition
The only time you can "see" a dog in FlexSim is during the Conveyor's Catch Condition: https://docs.flexsim.com/en/23.1/Reference/PropertiesPanels/ConveyorPanels/ConveyorBehavior/ConveyorBehavior.html#powerAndFree The catch condition fires when a dog passes by an item. If the catch condition returns 1, the item catches the dog and transfers to the power-and-free conveyor. If the catch condition returns 0, the item does not catch the dog. During the catch condition (and only during a catch condition), you can learn many things about a dog:
- ID - each dog has an ID. The ID is derived from the length of the conveyor and the distance the conveyor has travelled. If a conveyor is 26 dogs long, the dogs will have IDs 1 through 26.
- Location - since an item is trying to catch the given dog, you can derive the dog's location from the item's location.
- Speed - the conveyor that owns the dog is "current" in the catch condition, so you can get the speed of the conveyor at that point.
We'll use all these pieces of information in a moment.

Creating Tokens to Represent Dogs
The first real insight into this model is to make a dummy item. The purpose of this dummy item is to cause the catch condition to fire. It never gets on the conveyor. But when the catch condition fires, it makes a token that represents the dog. In this example, that item has a label called "DogFinder". Here is the relevant code from the catch condition:

    if (item.DogFinder?) {
        Object pe = current.DogPE;
        if (pe.stats.state(1).value == PE_STATE_BLOCKED) {
            return 0;
        }
        double dist = current.MaxDogDist;
        double speed = current.targetSpeed;
        double duration = dist / speed;
        if (!item.labels["DistAlong"]) {
            item.DistAlong = Vec3(item.getLocation(1, 0, 0).x, 0, 0).project(item.up, current).x;
        }
        Token token = Token.create(0, current.DogHandler);
        token.DistAlong = item.DistAlong;
        token.Conveyor = current.as(treenode);
        token.Duration = duration;
        token.DogNum = dogNum;
        token.Speed = speed;
        token.DetectTime = Model.time;
        token.release(1);
        return 0;
    }

There's a lot going on in this code:
- This logic only fires for the fake dog-finder item.
- If the photo eye just upstream from the dog is blocked, that means there is an item, and this dog is not available. Return 0 in that case.
- Figure out how long this dog will last (the duration), assuming the conveyor runs at the same speed. In this model, there's a label on the conveyor called MaxDogDist. This is the distance from the PE to the end of the conveyor, minus 2 meters.
- If this is the first dog ever, calculate the position of the dog, given the position of the item. Store that on a label.
- Create a token with all kinds of labels. We'll need all this information to estimate where the dog is later, and to estimate how far it is from other items.

Pushing Dog Tokens to a List
Once the token is made, we need to push it to a list, so that items can pull them.
If all your items are the same size, you can just push the token to a list directly. In this model, however, there are larger items that require two dogs. So there's a Batch activity first. The dummy item is far enough back that it can detect two dogs and still push the first dog to the list in time for the first lane. So it holds the dog back in a Batch activity until one of two things happens: either the next dog token appears, completing the batch, or the max wait timer on the batch expires, indicating that the next dog is not available (otherwise, there would have been a token). This duration is based on the conveyor's speed and dog interval. If the batch is complete, the first dog in the batch can be marked as a "double", meaning the dog behind it is also available. Once the flow has determined whether the dog is a single or double, it pushes it to the list.

Creating the DistToDog Field
When pulling the dog from the list, an item needs to know the position of the dog relative to the item. Is it 0.3 meters upstream? Or is it 2 meters downstream? When we query the set of dogs, we need to filter out downstream dogs and order by upstream dogs, to reserve the closest one:

    WHERE DistToDog >= 0 ORDER BY DistToDog

Here, DistToDog is positive if the dog is upstream, and negative if the dog is downstream. The code for this field is as follows:

    /**Custom Code*/
    Variant value = param(1);
    Variant puller = param(2);
    treenode entry = param(3);
    double pushTime = param(4);

    double distAlong = Vec3(puller.getLocation(1, 0, 0).x, 0, 0).project(puller.up, value.Conveyor).x;
    double dt = Model.time - value.DetectTime;
    double dx = value.Speed * dt;
    double dogDistAlong = value.DistAlong + dx;
    return distAlong - dogDistAlong;

This code assumes that the item waiting to merge is the puller. So we calculate the item's "dist along" the main conveyor. Then we estimate the location of the dog since the DogFinder item created the token. Then we can find the difference between the item's position and the dog's position.

Pulling Dogs from the List
Each incoming lane has a Decision Point. The main process flow creates a token when an item arrives there. At a high level, this token just needs to do something simple: pull an available downstream dog. If all the items are the same size, it's that simple. But this example is more complicated! If the item is large, we also need to pull the upstream dog behind the dog we got, so that no other item can get that dog. And it gets even more complicated! It can happen that an item acquires the dog after a double dog. In that case, we need to mark the downstream dog as "not double", so that big items won't try to get it. Most of the logic in the ConveyorLogic flow handles that case.

Using the Dog
Finally, the item must be assigned to that dog. The ConveyorLogic flow sets the DogNum label on the item. Then, the catch condition checks whether the dog matches the item's DogNum.

Upstream Items
The final piece of this model is allowing upstream items to catch a dog on this conveyor. This model adds a special label to those items called "ForceCatch". The catch condition always returns true for those items.
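For context, the pull that the lane token performs might look roughly like this in FlexScript; the list name is an illustrative assumption, not the model's exact code, and the query is the one shown above:

    // pull the nearest upstream dog token for this puller
    Variant dog = List("Dogs").pull("WHERE DistToDog >= 0 ORDER BY DistToDog", 1, 1, token);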
View full article
If you've ever tried to nest groups of objects inside a hierarchy of planes, you may find the drawing of the planes suboptimal and lacking information: Using the container (modified plane) in the attached user library, you can represent the container with just an outline. A settings dashboard is installed with the library along with some user commands and global variables. Corner prisms show the nesting layers under the prism: The option 'Use the container center' allows you to use it either as a plane, as before, or, when unselected, as a bordered frame where dropping an object or clicking and dragging within the borders will behave as though you are dropping onto or clicking/dragging the model floor. You can also choose to hide the containers entirely for the cleanest visuals. I hope this will encourage users to use containers more, since when coupled with Templates and Object Process Flows they can increase scalability and make your developed assets more manageable. (In those cases the container becomes the member instance of the process flow or template master, and references to its components are made through pointer labels on the container rather than names, which you may want to alter for reporting purposes. The pointer labels are updated automatically when creating a new instance of the container.) ContainerMarkers_v1.fsl If you want planes you already have in your model to adopt this style, just add this to their draw code:

    return containerdraw(view, current);
View full article
In the attached model we use a Time Table and two MTBF/MTTR objects to define schedule loss, availability loss (breakdowns), and an element of performance loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the send-to percentage, which accounts for quality loss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with a minimum of 5 seconds - so it also contributes to performance loss. When the Type of the item changes, a setup time occurs, which is the final contributor to performance loss. Two state profiles were added to the processor - one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) and regular FlexSim state changes, and determines the availability state that should prevail. A user command getOEEstat is used to access the values, which it calculates on demand and stores in a label on the processor called statsMap. The syntax for this command is:

    getOEEstat(myMachine, "OEE")

The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE". A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats for processor 1. Processor_OEE_2.fsm 2023-08-22 Update: Added 'TEE' stat.
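As a quick sanity check on the ratios, the standard OEE definition multiplies them together; whether getOEEstat follows exactly this convention is an assumption to verify against the model:

    double oee = getOEEstat(myMachine, "AvailabilityRatio")
               * getOEEstat(myMachine, "PerformanceRatio")
               * getOEEstat(myMachine, "QualityRatio");
    // should match getOEEstat(myMachine, "OEE") under the standard definition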
View full article
In this example model you'll see two identical elevator setups. However, you will notice that ElevatorBank1 allows the patient to move to the next floor properly, whereas ElevatorBank1_2 will float the patient up the network node instead of using the elevator. There are a few steps you must follow to ensure you have a properly working elevator in your model. First, make sure everything is working the way you intended, without an elevator. Then add in the elevator, select it, and check the 'Connect to Path' box as seen below. Finally, ensure that the elevator is connected to the nearest path node and any nodes above it. One thing to note is that after resetting, if you click on any of the path nodes that the elevator is connected to, you will see that the On Arrival trigger now says Send Message to Request Elevator. This is the code that actually calls the elevator when a patient arrives at the node. The elevator automatically adds this to the connected nodes when resetting, but this trigger option can be added to any node. Another good practice, especially if patients walk by the elevator without always using it, is to make a separate node off on a spur. That way patients aren't triggering the elevator every time they walk by.
View full article
Modeling LWBS (left without being seen) patients is tricky business and can cause exception errors under certain circumstances if not done correctly. The problems usually arise due to patients who exit the model early while there are pending requests queued up in the model for future activities on the patient. The attached model demonstrates the current best practice for safely modeling LWBS patients without the possibility of generating unwanted errors. The modeling technique is very simple: use the On Entry trigger of the waiting room object(s) to send a delayed message to itself in X minutes, where X is a time sampled from an "impatience curve" representing the amount of time a typical patient is willing to wait before they strongly consider leaving. In the On Message trigger of the waiting room object(s), I have written a code snippet that you will want to copy and modify to suit your own modeling requirements. The code snippet in the example model checks to make sure the patient is still in the waiting room waiting for an exam room at the end of the "impatience time" when the message trigger fires. Then I roll the dice using a Bernoulli distribution to decide whether or not to have the patient actually leave. The code snippet uses a different probability for each of two patient types (PCIs) and a default probability of 50 percent for anyone else. Not only does this modeling approach avoid undesirable exception errors, but it is also more accurate and definitely more efficient than using the Quick Properties fields in the Patient Condition panel of a waiting room, which requires repetitive function calls every so many minutes throughout the model run!
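A minimal sketch of that pattern, with illustrative numbers and labels (the attached model's actual snippet differs and handles the patient types individually):

    // On Entry of the waiting room: schedule an impatience check for this patient
    double impatience = duniform(20, 60); // stand-in for sampling your impatience curve
    senddelayedmessage(current, impatience, current, tonum(item));

    // On Message: decide whether the patient actually leaves
    Object patient = tonode(msgparam(1)).as(Object);
    if (objectexists(patient) && patient.up == current) { // still waiting in this room?
        if (bernoulli(50, 1, 0)) { // e.g. a 50 percent chance to leave
            // route the patient to the LWBS exit here
        }
    }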
View full article
Attached is an example model that shows how you can use reversible conveyors for routing/sorting of items. ReversibleRoutingConveyor.fsm Traditionally we've sort of warned against using reversible conveyors for purposes other than accumulation buffers. The main reason I've been hesitant to promote alternative uses is that the routing system for conveyors is, and will continue to be, static. In other words, the path finding algorithm to send an item through a network of conveyors to a destination point does not change when one or more conveyors in the system is reversed. Put another way, "for routing purposes, ..., the conveyor is always assumed to be conveying in its original direction." This naturally makes using reversible conveyors for routing more complex. However, as long as you can still work within those constraints, you can actually get the desired outcome. The attached model does this by 'shortening' the routing decision so that it can always route onto conveyors in their forward direction. The attached model sorts items by color by moving them between two conveyors via a reversible conveyor that conveys in either direction as needed. In order to still work within the 'static routing' rule, I split the reversible conveyor into two separate conveyors that are directed into each other. This way, I can route items onto the reversible section by referencing a conveyor whose primary forward direction always diverts from the line a given item is on. The critical element is that I have to always make sure that when one conveyor is moving forward, the other is reversed, and vice versa. I also have to implement some mutual exclusion, blocking some items so they aren't sent to conveyors in opposite directions. This all is done in the process flow. I honestly don't know how close this example is to a real-life situation. We've just received some requests for a reversible conveyor that can do more than just accumulation buffers, and routing/sorting is the main alternate example I can think of. This is one way you can achieve such a result.
View full article
This example model was created to clarify pre-emption of task sequences created in process flows. Specifically it will show:
1) that pre-empting of task sequences generated in a process flow does not require the token generating the sequence to be pre-empted.
2) that you can push a partially created sequence to a tasksequence list type.
3) how to use recursive calls to a sub flow.
and afterwards:
4) how the milestone task can be used in a PF-generated task sequence
5) the use of a task-type nodefunction

Each queue is a member of the object process flow shown in the picture. Each generates a box along with a job to take it to another queue, and pulls an initial task executer from the list of FreeTEs. The first task, to travel and collect the box, is preemptable, so we push the task sequence to a Preemptable Jobs list before we add that travel task, and pull it off the list when the task has completed. The rest of the task sequence is a standard load, travel, and unload set of tasks. When the job is complete we know that the TE is free and could preempt another TE - which could be desired if the newly free TE is closer to the pickup queue than the one currently assigned. The CheckPreempt subflow is called with a label referencing the TE that has become free. This TE may not be the original TE that we pulled from the list of FreeTEs when the job was started, so we discover the current TE by looking for the task sequence's owner object - done with the function findownerobject(), since it does not cache the value in the same way ownerobject() does, and we could have had more than two TEs assigned to this task sequence. Since the task sequence gets destroyed, I do this in the Assign Labels activity before we finish the task sequence. The Check Preempt subflow first tries to pull a preemptable job where the invoking TE is closer than the current executer of that job, and chooses the one with the most pronounced difference. The tasksequence list type has the distance field prepopulated, and from that the two other field expressions, otherDistance and howMuchCloser, are derived. If no task sequence is chosen, then we push the finishing TE to the FreeTEs list. If we choose a task sequence to preempt, then we first label the token with the otherTE and preempt the task sequence with a simple zero-delay task. Then we redispatch the task sequence to the TE that is calling Check Preempt. Since we now know that we've freed the otherTE from the task sequence it had, we can invoke the Check Preempt subflow for that task executer, knowing that it will either get put onto the FreeTEs list or get another task sequence and preempt another TE. This is therefore a recursive subflow call, with the exit condition being that no preemptable task sequence is found. Note that no token preemption from activities takes place, and this is because the task sequence is still tied to the token and its activity, which will still receive the callback when a task is complete, regardless of which task executer actioned the task. TEpreemptIfCloser1.fsm

Next: changing colors to indicate preemptable TEs and their destination - using milestone and nodefunction tasks. To better see the allocation and preempt activities, we'll now change the model such that the TE color reflects the following: white when idle; gray when busy and not preemptable; matching the pickup queue's color when preemptable. First of all I defined two user commands - one to set the color of the TE (setTEcolor) and another (commandNode) to get the executable node of a user command.
This is so that I can add the fn() SetColor activity (it's a custom task of type nodefunction); specify the function using commandNode("setTEcolor") and pass in the queue color as parameter 2 (parameter 1 will be the TE that is executing this task). This means the user command code is just:

    Object te = param(1);
    Color col = param(2);
    te.color = col;

The milestone task allows us to define a set of tasks that need to be repeated if the TE gets pre-empted. It's added again using the Custom Task activity, selecting the type as Milestone, and has a parameter to specify the number of subsequent tasks that should be considered within the preemption range that will trigger tasks to be repeated. Since we want the travel preemption to retrigger the setting of the color, we set the range to 2. However, since the token only needs to be notified when the travel task is complete that it should go on to the next process, and until then waits in the travel task, we can uncheck the "Wait Until Complete" box of the Milestone and fn() SetColor activities. If you don't clear this for those two tasks, expect some misbehavior/desynchronization of the token. Finally, there needs to be a reset trigger on the TE to set the color to white, and a regular Change Visual activity after the pickup point is reached to set it to gray. Note that I prefer the use of commands to sending messages, which would be another option, mostly because users can tell the intent of the call without having to inspect the message code. TEpreemptifCloser2.fsm
View full article
This article is basically a follow-up to this question: https://answers.flexsim.com/questions/98195/simultaneous-all-or-nothing-list-pulls.html In version 22.1, we added a new FlexScript feature called Coroutines. Basically, this lets you wait for events in the middle of executing FlexScript, using the "await" keyword. Recently, I decided to revisit this question (how to pull from multiple lists at once) and to see if I could use coroutines to simplify the Process Flow. The short answer is: yes! Here is a model that behaves identically (in terms of which token gets its multi-list request filled first) but replaces about 15 activities with 2 Custom Code blocks: multipullexample_coroutines_ordering_onfulfill.fsm In addition to having fewer activities, this model also runs faster (from ~12s to ~2s on my computer). To determine whether behavior was the same, I added a Statistics Collector and logged request/fulfill times and quantities. I did the same in the original model. The token IDs are off because the older model makes more tokens. Here's the older version, but with that stats collector, if you want to do your own comparisons. multipullexample.fsm The key lines of code are found in the Acquire All activity:

    // ~line 47
    List.PullResult result = list.pull("", qty, qty, token, partition, flags);
    if (result.backOrder) {
        token.Success = 0;
        await result.backOrder;
        return 0;
    }
    // ...

The new item here is the keyword "await". When the FlexScript execution reaches this line, the FlexScript execution is paused, and waits for the "awaitable" that you specify. In this case, we want to wait for the back order to be fulfilled. In both models, tokens check all lists to see if they can acquire the complete set of resources. In both models, if a token can't immediately fulfill one of its requests, tokens "go to sleep" until something changes. In the old model, tokens would "wake up" if anything was pushed to the correct list. In contrast, tokens in the new model only "wake up" if enough items are pushed to fulfill their back order. Basically, the second model has fewer false "wakeups" and so runs quite a bit faster.
View full article
Sometimes data exists in Google Sheets that needs to be brought in to FlexSim. There are multiple ways to do this, discussed in this article.

Copy and Paste
This is the easiest method to get data from Google Sheets into FlexSim. Here's how it works:
1. Open the desired sheet in your browser
2. Click the top-left corner to select everything
3. Copy the data (use Ctrl-C)
4. Open FlexSim
5. Create a Global Table if you haven't already
6. Ensure the number of rows and columns in the Global Table is large enough to hold the pasted data
7. Click on the column header for the first row in the Global Table
8. Paste the data (use Ctrl-V)
Pros: Quick, easy
Cons: Need to resize the global table correctly beforehand; repeat the entire process if the data changes.

Export/Import via CSV
This is also an easy method to get data. Here are the steps:
1. Download your sheet as a csv file.
2. In FlexSim, use the importtable() command to dump the csv into the global table. For example:

    importtable(Table("GlobalTable1"), "data.csv", 1)

You could add this code to your model's OnReset trigger if desired.
Pros: Quick, table sized to csv data automatically
Cons: Repeat downloading the csv if the data changes.

Export/Import via XLSX
You can also download a Google spreadsheet as an Excel file. Then you can use the Excel importer as normal.
Pros: Quick, table sized to data automatically, many options for configuring
Cons: Repeat downloading the xlsx file if the data changes

Import via Python
This method is more advanced and requires some configuration for the model and your Google account. Once complete, however, changes can be pulled in automatically without any manual steps. Follow the Sheets quickstart for Python found here: https://developers.google.com/sheets/api/quickstart/python This guide walks you through creating a Google Cloud Project and creating credentials for that project. In addition, consider using this modified python file instead. This file creates a get_values method that the model can call, and that method is also called from main(), so it's easy to test in a python debugger:

    import os.path

    from google.auth.transport.requests import Request
    from google.oauth2.credentials import Credentials
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError

    # If modifying these scopes, delete the file token.json.
    SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]

    # The ID and range of a sample spreadsheet.
    SAMPLE_SPREADSHEET_ID = "----- add your sheet's ID here -------------"
    SAMPLE_RANGE_NAME = "A1:B"

    def get_values():
        """Shows basic usage of the Sheets API.
        Returns values from a sample spreadsheet.
        """
        creds = None
        # The file token.json stores the user's access and refresh tokens, and is
        # created automatically when the authorization flow completes for the
        # first time.
        if os.path.exists("token.json"):
            creds = Credentials.from_authorized_user_file("token.json", SCOPES)
        # If there are no (valid) credentials available, let the user log in.
        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                creds.refresh(Request())
            else:
                flow = InstalledAppFlow.from_client_secrets_file(
                    "credentials.json", SCOPES
                )
                creds = flow.run_local_server(port=0)
            # Save the credentials for the next run
            with open("token.json", "w") as token:
                token.write(creds.to_json())

        try:
            service = build("sheets", "v4", credentials=creds)

            # Call the Sheets API
            sheet = service.spreadsheets()
            result = (
                sheet.values()
                .get(
                    spreadsheetId=SAMPLE_SPREADSHEET_ID,
                    range=SAMPLE_RANGE_NAME,
                    valueRenderOption="UNFORMATTED_VALUE",
                )
                .execute()
            )
            values = result.get("values", [])
            return values
        except HttpError as err:
            return []

    def main():
        values = get_values()
        if not values:
            print("No data found.")
            return
        for row in values:
            print(row)

    if __name__ == "__main__":
        main()

Save the above script next to your model. Create a user command in your model. Format the user command for python and enter the file name and method name. It might look something like this:

    /**external python: */
    /**/"sheets"/**/
    /** \nfunction name:*/
    /**/"get_values"/**/

The return type of the command should be var, which means any Variant type. Use code like the following to clone the data to a global table:

    Array values = getValues(); // call the user command
    Array colHeaders = values.shift();
    for (int i = 1; i <= values.length; i++) {
        Array row = values[i];
        row[0] = nullvar;
    }
    Table(values).cloneTo(Table("GlobalTable1"));

Add the above code to a reset trigger.
Pros: automatic once complete, easy to keep data up-to-date
Cons: requires complicated setup, some python coding. The script could be adjusted to download additional ranges, and then return all data at once, but that requires some code ability.

Import via HTTPS
Google recommends you use a client library to access its APIs. However, it is entirely possible to use HTTPS requests instead. This could all be done from FlexScript, with no additional installations required.
Pros: done all from FlexScript, no extra installs
Cons: very technical

Conclusion
There are several ways to extract data from Google Sheets into FlexSim. Each has pros and cons. Choose the one that best fits your circumstances. Good luck!
View full article
The attached model contains a basicTE to mimic some operations of a tower crane. You should be able to use it like any other task executer. Labels on the crane allow the speeds and operating heights to be altered. To change the jib/beam length, use the label parameter and it will apply at reset. Similarly, to change the height, for now just change the tower height and press reset to have the rest attached at the correct height. TowerCrane_basicTEexample.fsm Update: Added a user library that will scale the crane based on the model units. Also changed some labels so that rotational speed is specified there, and the jib/beam now uses the object properties for max speed and acceleration. TowerCrane.fsl
View full article