FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
In this article, you will learn how to export data from individual replications to an external database. This is an advanced tutorial that assumes some exposure to databases and SQL. It also assumes that you are familiar with the Experimenter and/or Optimizer features in FlexSim. This tutorial uses PostgreSQL, but should be compatible with most popular SQL database engines.

Model Description

Let's assume you have a model with at least one Statistics Collector, as well as an experiment ready to go. The model used in this example is fairly simple. As the model runs, a Statistics Collector fills in a table. The table makes a new row for every item that goes into the Sink, recording the time and the Type label of the item. The experiment has no variables, but it does have one Performance Measure: the Input of the Sink. Of course, an actual model would have many Variables, Scenarios, and Performance Measures, but this model leaves out those details. For reasons that come up later, this model also has two global variables, called g_scenario and g_replication. We will use these during the experiment phase.

Connecting to the Database

To connect to a database, you can use the Database Connector tool. Each Database Connector handles the connection to a single database; if you need to connect to more than one database, you will need more than one Database Connector. For this model, I have added a Database Connector and configured the Connection tab to connect to a PostgreSQL database called "flexsim_test" running on my computer at port 9001. You can test the connection by clicking the "Test Connection" button. The test attempts to connect, as well as query the list of tables found in the database.

Setting Up the Export

Next, take a look at the Export tab. This tab specifies that we are exporting data from StatisticsCollector1 to the table called experiment_results in the database (this table should exist before you set up the export). The Append to Table box means that when the export occurs, the data will be added to the table; otherwise, the data in the target table would be cleared first. The other interesting thing here is that we are exporting more columns than the Statistics Collector has, namely the Scenario and Replication numbers. The expression in the From FlexSim Column column must be valid FlexQL (FlexSim's internal SQL dialect). Wrapping the values in Math.floor() leaves the values unchanged and works well with FlexQL.

Setting up the Experimenter

Finally, let's take a look at the little bit of code that makes the Experimenter dump data to a database. There is code in the Start of Experiment, Start of Replication, and End of Replication triggers. In the Start of Experiment, we need to clear the table: the code in this trigger connects to the database, runs the necessary query, and then closes the connection to the database. In the Start of Replication trigger, the code simply copies the replication and scenario values into the global variables we created for this purpose. In the End of Replication trigger, the code uses function_s to call "exportAll" on the database connector that we created. This uses the settings on the Export tab to dump the data from StatisticsCollector1 to the database table, including the replication and scenario columns. These same triggers run during an optimization, so the same logic will apply.
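The attached model contains the exact trigger code. As a rough sketch of what those three triggers might look like — the connector path, table name, and the "connect"/"query"/"disconnect" function names are assumptions; only "exportAll" is confirmed above:

// Start of Experiment — clear the results table (a sketch; method names other than "exportAll" are assumptions)
treenode connector = node("/Tools/DatabaseConnectors/DBConnector1", model());
function_s(connector, "connect");
function_s(connector, "query", "DELETE FROM experiment_results");
function_s(connector, "disconnect");

// Start of Replication — copy the current scenario/replication into the globals
// (assumes the experimenter trigger provides these values under these names)
g_scenario = scenario;
g_replication = replication;

// End of Replication — dump StatisticsCollector1 to the database using the Export tab settings
function_s(connector, "exportAll");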
Run the Experiment

Finally, you can run the experiment. When each replication ends, the Statistics Collector data will be exported to the database. Note that the child processes that run replications each open their own connection to the database at the end of a replication. This leads to many concurrent connections, which most databases are designed to handle.

But how can we tell that it worked? If you connect to the database with another tool, you can see the result table. Note that there are 1700 rows, and that data from Replication 5 is included. Additionally, you could use the Import tab to import all the data, or a summary of the data, into FlexSim.

Conclusion

Storing data in a database during an experiment is not difficult to do. There are many excellent tools for analyzing and visualizing database tables, which you could then use to further explore and understand your system. postgresqlexperimentdemo.fsm
FlexSim's Web Server is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that allows FlexSim to be served to a browser, you may decide you want to proxy this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying the FlexSim Web Server through the Nginx web server.

Install the FlexSim Web Server Program

1. Download and install the FlexSim Web Server from https://account.flexsim.com
2. Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
3. Change the port from 80 to 8080
4. Start the FlexSim Web Server by double-clicking flexsimserver.bat
5. Test the server by going to http://127.0.0.1:8080

Install Nginx Reverse Proxy

1. From a browser, visit http://nginx.org/en/download.html
2. Download the latest stable release for Windows
3. Extract the downloaded nginx-<version>.zip
4. Rename the unzipped nginx-<version> folder to nginx
5. Copy the nginx folder to C:\
6. Double-click the C:\nginx\nginx.exe file to launch Nginx
7. Test Nginx by going to http://127.0.0.1

Configure Nginx to Proxy to the FlexSim Web Server

Open C:\nginx\conf\nginx.conf in a text editor and find the section that says:

location / {
    root   html;
    index  index.html index.htm;
}

Comment out the root and index directives and add a proxy_pass directive so it appears like this:

location / {
    proxy_pass http://127.0.0.1:8080;
    #root   html;
    #index  index.html index.htm;
}

Save the nginx.conf file.

Reload Nginx to Apply the Changes

Open a command line window by pressing Windows+R to open the "Run" box, typing "cmd", and clicking "OK". From the command line window, change to the nginx directory and reload Nginx:

cd C:\nginx
nginx -s reload

Test the FlexSim Web Server Being Proxied by Nginx

From a browser window, again go to http://127.0.0.1. You should now see the FlexSim Web Server interface proxied through Nginx. Now that you have the FlexSim Web Server proxied through Nginx, you may decide you want to configure Nginx to handle security, authentication, and customization. Since this is out of the scope of this guide, you can find details on the Internet to guide you through setting up these customizations. A few resources you may consider: https://forum.nginx.org and https://stackoverflow.com
In version 2018 and forward, you can make this chart using a Chart Template: simply drag and drop the chart from the dashboard library. This article may help you understand how the chart template works. You can use the Install button on the chart template to view the Process Flow, Statistics Collector, and Calculated Table that make the chart.

This article reviews how to use the Zone, along with the Statistics Collector, to create a bar chart of the current work-in-progress (WIP) by item type. The method scales to as many types as you need (this example uses 30 types), and can easily be adapted to text data, like SKUs. An example model (zonecontentdemo.fsm) demonstrates this method.

Creating the Process Flow

To create this chart, we first need to gather its data. In this case, it is easiest to build off the capabilities of the Zone. In particular, we will use Zone Partitions to categorize all of our objects. After we set up the Zone, we'll use a Statistics Collector to gather the data that we need. Create a new General Process Flow. You can have as many General Process Flow objects as you want, so let's make one that just deals with gathering statistics. This way, gathering statistics will not interfere with the logic in our model.

Here's how it works. The Listen to Entry source is configured to listen to a group of objects. In this case, the group contains all the sources in the model, and it's listening to the OnExit of the sources. However, it could be the OnEntry or OnExit of any group of 3D objects. If you want to split the statistics by Type or SKU, any flowitem that reaches the entry group should already have the appropriate labels. In this example model, when a flowitem leaves any source, a token gets created. The token makes a label called Item that stores a reference to the created item. The next step is to link the flowitem with the token that represents it, which is the job of the Link Token to Item activity. Once linked, the Item has a label that points back to the token. The token then enters a Zone, which is partitioned by type.

At last, the token comes to a Decide activity, configured not to release the token; the token will instead be released by the second part of the flow. Once the token is released, it exits the Zone and goes to a Sink. The second part of the flow has an Event-Triggered Source configured to listen to all the sinks in the model. Again, the entry and exit objects are arbitrary; you can gather data for the entire model, or for just a small section of it, using this method. The event-triggered source also caches the item in a label. At this point, we need to release the token that was created when the item entered the system. The token created on the exit side has a reference to the item, which has a reference back to the token created on the entry side; the Release Token activity releases that entry-side token to 1, meaning connector 1.

Note: We could have used a Wait for Event activity in the Zone, and then used the match label option to wait for the correct item to leave the system. However, this method is much, much faster, especially as the number of tokens grows.
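Since the configuration screenshots are not reproduced here, the following sketch shows roughly what the entry-side linking and exit-side release amount to in FlexScript. Label names follow the article; treat the exact expressions as assumptions:

// Entry side (Listen to Entry / Link Token to Item):
token.Item = item;          // token label referencing the flowitem
Object linkedItem = token.Item;
linkedItem.Token = token;   // label on the item pointing back to its token

// Exit side (Release Token), releasing the entry-side token to connector 1:
Token entryToken = token.Item.Token;
entryToken.release(1);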
Creating the Bar Chart Statistics Collector

The next step is to create a Statistics Collector that gathers data appropriate for a bar chart. Note that this method grows the number of rows dynamically, so it won't matter how many types (or SKUs) your model has; you will still get one bar per type/SKU. In order to make the number of rows dynamic, we need to listen to the OnEntry and OnExit of the Zone activity.

Notice the shared label on this collector. Because this label is shared, both the OnEntry and OnExit events will create it on the data object. The value of this label is the item's type.

Next, we move to the Data Recording tab. Set the Row Mode to Unique Row Values, and set the row value to Partition. This means that whenever an event fires, the statistics collector will look at the Partition label on the data object. If the value is new to the statistics collector, the collector will make a new row for this value. If not, the collector will use the row that is already present.

Finally, we need to make our columns. We only need two: one for the Partition, and one for the Content of that partition. Both of these columns can use the Integer storage type and raw display format. However, if your partition value is text-based, like an SKU, you should use the String storage type instead. The value for the Partition column is just the data object's Partition label. Notice that its Update option is set to When Row is Added; this way, the statistics collector knows that this value will not change and that it is available at the time the row is created.

The Content column is a little harder, because we need to use the getstat command. The getstat arguments depend on the statistic you are trying to get. In this case, we are asking the Zone (current, the event node) for the Partition Content statistic, and we want the current value. Since this is a Process Flow activity, we pass in the instance as the next argument. Finally, we pass in which partition we want the data from: the row value (in this case, we could equivalently have passed in data.Partition). Also notice that this column is updated by event dependency. To make sure this does what we want, we edit the event/column dependency table so that the Content column updates when items enter and exit the Zone.
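Putting that together, the Content column's value code looks something like this sketch; the exact statistic name string should be sampled from the collector's picklist, so treat it here as an assumption:

// Content column value — current content of this row's partition (a sketch)
getstat(current, "Partition Content", STAT_CURRENT, instance, data.Partition)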
Now, open the table for the statistics collector. You should see two columns. When you run the model, rows will be added as items of different types are encountered; early in the run, before all 30 item types have been encountered, the table will have fewer than 30 rows.

Making the Bar Chart

This is the easiest part. Create a new dashboard and add a new bar chart. Point the chart at the statistics collector. For the Bar Title option, choose the Partition column, and be sure to include the Content column. Also, make sure that the "Show Percentages" checkbox on the Settings tab is cleared. You can set the colors on the Colors tab.

Ordering the Data

Because the rows of this table are created dynamically, the order of the rows will likely change from run to run. To force an ordering, you can use a Calculated Table. Since the number of rows in this table doesn't grow indefinitely, and the number is relatively small, it's okay to set the Update Mode on the Calculated Table to Always. We simply select all columns from the target collector (CurrentContent, in this case) and order them by the Partition column. That yields an ordered bar chart.

Example and Additional Charts

The attached example model demonstrates this method, as well as how to create a WIP By Type vs Time chart. Happy data collecting! zonecontentdemo.fsm
One of the most powerful features of Process Flow is the ability to easily define a Task Sequence. However, many real-life situations require the coordination of multiple workers and machines to do a single task. This article demonstrates one approach you can use with Process Flow to coordinate multiple Task Executers, or in other words, to create a coordinated task sequence. This article discusses an example model (handoff.fsm). It might be easiest to open that model, watch it run, and read this article with the model open.

The Example Scenario

Items enter the system on the left. The yellow operator must carry each item to the queue in the middle, and then wait for the purple operator to arrive. Once the two operators are both at the middle queue, the yellow operator can unload the box, and the purple operator can take it. After this point, the yellow operator is free to load another item from the left queue. The purple operator takes the item, waits for a while, and then puts the item in the sink on the right. The interesting part of this model is the hand-off: the yellow operator must wait for the purple operator, and vice versa. This is the synchronization point, and it requires coordination of both operators. The approach used in this model allows you to add more operators to the yellow side, and more to the purple side, while still ensuring that a yellow operator must wait for a purple operator before unloading the box.

The Example Model

In addition to the 3D layout described above, there are five process flows in the example model. The first is a General flow, and defines the logic for each task. The second is a Task Executer flow, and defines the logic for the yellow operator. The third is also a Task Executer flow, and defines the logic for the purple operator. The remaining two flows are synchronization flows, for synchronizing between the other three.

Synchronizing on a Task

The basic approach in this model uses the Synchronize activity. This activity waits for one token from each incoming connector before it allows any of the tokens to move on. The Yellow Purple Sync flow (a global Sub Flow) handles this in the example model. The flows for the yellow and purple operators each use a Run Sub Flow activity to send a token to this flow, to their respective Start activities (you can use the sampler on the Run Sub Flow activity to sample a specific Start activity in a sub flow). This is what allows both the yellow and purple operators to wait for each other. However, it is important that the yellow and purple operators are both doing the same task. In this model, there is a token that represents each item that needs to be moved, and both operators get a reference to this task token. The Synchronize activity is set up to partition by that Task token. That means a yellow operator and a purple operator must both call this sub flow with the same task token, ensuring that each task has its own synchronization. In the example model, this kind of synchronization happens between operators, and also between each task and an operator: the task must wait for the operator to finish that operator's part.

The Task Flow

A task token is created every time an item enters the first queue. The Tasks flow puts that task on both the Yellow and Purple lists. In both cases, the task token does not wait to be pulled, but keeps itself on the list.
Then the task token waits for a yellow operator to finish with it, and then for the purple operator to finish with it. There is a Zone in this flow, but its only purpose is to gather statistics on how long the whole task took.

The Yellow and Purple Flows

These flows are easiest to understand when viewed side by side. Recall that each task is put on both the Yellow and Purple lists at the exact same model time. The yellow operator waits to get a task (at the Get Task activity). Then the operator travels to the first queue, gets the item, and travels to the second queue. At this point, the yellow operator waits. At the same time, the purple operator is also waiting for the task. The purple operator just has to travel to the second queue before waiting for the yellow. Once the yellow operator arrives, the purple operator also has to wait for the yellow operator to unload the box. On the yellow side, once the purple operator arrives, the yellow operator unloads the box, and then synchronizes with the purple operator, allowing the purple operator to load the box.

Summary

The purpose of this article is to show one method for synchronizing tokens in separate flows. That method is as follows: Have a token for each task. As each task executer (or fixed resource) needs to synchronize, it uses a Run Sub Flow activity, putting a token in a specific Start activity. The sub flow (a global Sub Flow) has a Synchronize activity that requires a token from each participant for that task before releasing the tokens. This is certainly not the only way to build this model, but there are some advantages: By forcing the tasks to synchronize, you can gather stats on how long each phase of a task took, as well as how long the complete task took. You can add more yellow or purple operators by copy/paste; they simply follow their own logic. And each set of logic is separated: tasks, yellow operators, and purple operators each have their own flows, making each one much simpler. The exact approach used in the example model will not work as-is for every model, but you can apply the general principles and adapt them to your own situation.
The attached model contains a BasicTE that mimics some operations of a tower crane. You should be able to use it like any other task executer. Labels on the crane allow the speeds and operating heights to be altered. To change the jib/beam length, use the label parameter; it will apply at reset. Similarly, to change the height, for now just change the tower height and press reset to have the rest attach at the correct height. TowerCrane_basicTEexample.fsm

Update: Added a user library that will scale the crane based on the model units. Also changed some labels so that rotational speed is specified there, and the jib/beam now uses the object properties for max speed and acceleration. TowerCrane.fsl
Attached is an example DLL Maker project that calls a Java function using the Java Native Interface (JNI) from C++. The steps for how this works are outlined below:

1. Install the Java Development Kit (JDK). In creating this example, I used OpenJDK, installed via Microsoft's special installer of Visual Studio Code for Java developers.
2. Write a Java program. In my example, I created a simple Hello class with an intMethod(), as described in IBM's JNI example tutorial.
3. Compile the .java file into a .class file. I did this from the Command Prompt by executing: javac Hello.java
4. Configure the DLL Maker project to include the JNI library. In VC++ Directories > Include Directories, add the following two directories: C:\Program Files (x86)\Java\jdk1.8.0_131\include and C:\Program Files (x86)\Java\jdk1.8.0_131\include\win32. In Linker > Input > Additional Dependencies, add jvm.lib. In Linker > Input > Delay Loaded Dlls, add jvm.dll.
5. Include jni.h in your code. (Line 51 of mydll.cpp)
6. Create a JVM in your code. (Lines 61-102 of mydll.cpp)
7. Get a handle to the method you want to call, and then call it. (Lines 118-126 of mydll.cpp)
8. Connect a User Command in FlexSim to fire the C++ code that executes the Java code. See the dll_maker_test_model.fsm included in the attached zipped directory.

Note: this example was built with 32-bit FlexSim because the JDK I installed was 32-bit. Using a 64-bit JVM is beyond the scope of this simple example and is left as an exercise for the user.
In this example model you'll see two identical elevator setups. However, ElevatorBank1 allows the patient to move to the next floor properly, whereas ElevatorBank1_2 will float the patient up the network node instead of using the elevator. There are a few steps you must follow to ensure a properly working elevator in your model:

1. Make sure everything is working the way you intended, without an elevator.
2. Add in the elevator, select it, and then check the 'Connect to Path' box.
3. Ensure that the elevator is connected to the nearest path node and any nodes above it.

One thing to note: after resetting, if you click on any of the path nodes the elevator is connected to, you will see that the On Arrival trigger now says Send Message to Request Elevator. This is the code that actually calls the elevator when a patient arrives at the node. The elevator automatically adds this to its connected nodes on reset, but the trigger option can be added to any node. Another good practice, especially if patients walk by the elevator without always using it, is to make a separate node off on a spur. That way patients aren't triggering the elevator every time they walk by.
Many simulation models need to show operator or machine utilization by time period of day, or per shift: shiftutilization.fsm

Under Construction

Summary: Until this article is completed, here is the basic idea. I use a process flow to create one token per operator in the group. Each operator token creates one token per shift. Each shift token creates its own categorical Tracked Variable (which is how state information is stored on all objects), and then simply remains alive, holding the tracked variable. The operator token goes into a loop, listening to the OnChange of the 3D operator's state profile node. Whenever the state changes, the token sets the shift token's state label to the same state. This essentially copies the state changes onto another tracked variable. Another child token listens for when the shift changes. When the shift changes, the operator token puts the current shift profile into some unused state (STATE_SCHEDULED_DOWN, in this case), and then applies new state changes to the correct shift token's label.

A Statistics Collector listens for when the operator token starts the main loop at the beginning of each shift. When this happens, the collector saves a row value that is an array composed of the operator and the shift token. The row mode is set to unique, so that each unique operator/profile combination gets its own row. Each column in the collector is made to show the data from the shift corresponding to that row. The result is the table shown in the article, which can be plotted with a pie chart as shown above.
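As a minimal sketch of the state-copying step described above — the Operator and ShiftToken label names are assumptions, and the current state is read from the operator's default state profile:

// inside the operator token's loop, after an OnChange event fires (a sketch)
Object op = token.Operator;             // assumed label referencing the 3D operator
int curState = op.stats.state().value;  // current state from the state profile
token.ShiftToken.State = curState;      // label holding the shift's categorical tracked variable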
This article explores an example model in which items on downstream lanes are able to reserve dogs so that items on upstream lanes cannot use them: reservedogdemo.fsm

About Dogs in FlexSim

FlexSim simulates dogs on a power-and-free system in an extremely abstract and minimal way. A dog isn't a persistent entity at all. Instead, FlexSim calculates where dogs would be, given the speed, and when they would interact with items. This has a huge performance benefit. But if your logic needs items to interact with specific dogs, this poses a problem: how do you interact with such an abstract entity?

The Catch Condition

The only time you can "see" a dog in FlexSim is during the conveyor's catch condition: https://docs.flexsim.com/en/23.1/Reference/PropertiesPanels/ConveyorPanels/ConveyorBehavior/ConveyorBehavior.html#powerAndFree

The catch condition fires when a dog passes by an item. If the catch condition returns 1, the item catches the dog and transfers to the power-and-free conveyor. If it returns 0, the item does not catch the dog. During the catch condition (and only during a catch condition), you can learn many things about a dog:

ID - Each dog has an ID, derived from the length of the conveyor and the distance the conveyor has travelled. If a conveyor is 26 dogs long, the dogs will have IDs 1 through 26.
Location - Since an item is trying to catch the given dog, you can derive the dog's location from the item's location.
Speed - The conveyor that owns the dog is "current" in the catch condition, so you can get the speed of the conveyor at that point.

We'll use all these pieces of information in a moment.

Creating Tokens to Represent Dogs

The first real insight into this model is to make a dummy item. The purpose of this dummy item is to cause the catch condition to fire. It never gets on the conveyor, but when the catch condition fires, it makes a token that represents the dog. In this example, that item has a label called "DogFinder". Here is the relevant code from the catch condition:

if (item.DogFinder?) {
    Object pe = current.DogPE;
    if (pe.stats.state(1).value == PE_STATE_BLOCKED) {
        return 0;
    }
    double dist = current.MaxDogDist;
    double speed = current.targetSpeed;
    double duration = dist / speed;
    if (!item.labels["DistAlong"]) {
        item.DistAlong = Vec3(item.getLocation(1, 0, 0).x, 0, 0).project(item.up, current).x;
    }
    Token token = Token.create(0, current.DogHandler);
    token.DistAlong = item.DistAlong;
    token.Conveyor = current.as(treenode);
    token.Duration = duration;
    token.DogNum = dogNum;
    token.Speed = speed;
    token.DetectTime = Model.time;
    token.release(1);
    return 0;
}

There's a lot going on in this code:

- This logic only fires for the fake dog-finder item.
- If the photo eye just upstream from the dog is blocked, there is an item there, and this dog is not available; return 0 in that case.
- Figure out how long this dog will last (the duration), assuming the conveyor keeps running at the same speed. In this model, there's a label on the conveyor called MaxDogDist: the distance from the PE to the end of the conveyor, minus 2 meters.
- If this is the first dog ever, calculate the position of the dog from the position of the item, and store it on a label.
- Create a token with all kinds of labels. We'll need all this information to estimate where the dog is later, and to estimate how far it is from other items.

Pushing Dog Tokens to a List

Once the token is made, we need to push it to a list, so that items can pull dogs from it.
If all your items are the same size, you can push the token to a list directly. In this model, however, there are larger items that require two dogs, so there's a Batch activity first. The dummy item is far enough back that it can detect two dogs and still push the first dog to the list in time for the first lane. So the flow holds the dog back in a Batch activity until one of two things happens: either the next dog token appears, completing the batch, or the max wait timer on the batch expires, indicating that the next dog is not available (otherwise, there would have been a token). This duration is based on the conveyor's speed and dog interval. If the batch is complete, the first dog in the batch can be marked as a "double", meaning the dog behind it is also available. Once the flow has determined whether the dog is a single or a double, it pushes it to the list.

Creating the DistToDog Field

When pulling a dog from the list, an item needs to know the position of the dog relative to the item. Is it 0.3 meters upstream? Or 2 meters downstream? When we query the set of dogs, we need to filter out downstream dogs and order by upstream dogs, to reserve the closest one:

WHERE DistToDog >= 0 ORDER BY DistToDog

Here, DistToDog is positive if the dog is upstream, and negative if the dog is downstream. The code for this field is as follows:

/**Custom Code*/
Variant value = param(1);
Variant puller = param(2);
treenode entry = param(3);
double pushTime = param(4);

double distAlong = Vec3(puller.getLocation(1, 0, 0).x, 0, 0).project(puller.up, value.Conveyor).x;
double dt = Model.time - value.DetectTime;
double dx = value.Speed * dt;
double dogDistAlong = value.DistAlong + dx;
return distAlong - dogDistAlong;

This code assumes that the item waiting to merge is the puller. So we calculate the item's "dist along" the main conveyor. Then we estimate the current location of the dog, given where the DogFinder item created the token. Then we can find the difference between the item's position and the dog's position.

Pulling Dogs from the List

Each incoming lane has a Decision Point, and the main process flow creates a token when an item arrives there. At a high level, this token just needs to do something simple: pull an available downstream dog. If all the items are the same size, it's that simple. But this example is more complicated! If the item is large, we also need to pull the upstream dog behind the dog we got, so that no other item can get that dog. And it gets even more complicated: an item can acquire the dog right after a double dog. In that case, we need to mark the downstream dog as "not double", so that big items won't try to get it. Most of the logic in the ConveyorLogic flow handles that case.

Using the Dog

Finally, the item must be assigned to its dog. The ConveyorLogic flow sets the DogNum label on the item; then the catch condition checks whether the dog matches the item's DogNum.

Upstream Items

The final piece of this model is allowing upstream items to catch a dog on this conveyor. This model adds a special label to those items called "ForceCatch". The catch condition always returns true for those items.
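As a rough sketch of the pull described under "Pulling Dogs from the List" above — the list name, Item label, and lack of a partition argument are assumptions, not the model's exact setup:

// pull the nearest available upstream dog for this merging item (a sketch)
List dogList = List("Dogs"); // hypothetical global list name
Variant dog = dogList.pull("WHERE DistToDog >= 0 ORDER BY DistToDog", 1, 1, token);
if (dog) {
    token.Item.DogNum = dog.DogNum; // assign the item to its reserved dog
}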
FlexSim 2018 includes functionality for creating a custom table view GUI using a custom data source from within FlexSim. This article goes through the specifics of how to set this up. The Callbacks Custom Table Data Source defines callbacks for how many rows and columns a table should display, what text to display in each of the cells, whether they're read-only, etc. If you require further control of your table, you can use a DLL or module and subclass the table view data source in C++.

An example of how this data source is used can be seen in the Date Time Source activity properties. There, the data used to display both tables is exactly the same: the raw treenode table data is shown on the right, while the table on the left displays the hours and minutes for each start and end time rather than the start time and duration in seconds.

In FlexSim 2018, a number of tables already utilize this new data source. They can be found at:

VIEW:/modules/ProcessFlow/windows/DateTimeArrivals/Table/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/CompositeStatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StateBarOptions/SplitterPane/States/Table

Copying one of these tables can be a good starting point for defining your own table. The first thing you have to do in order for FlexSim to recognize that your table uses a custom data source is to add a subnode to the style attribute of your table GUI. The node's name must be FS_CUSTOM_TABLE_VIEW_DATA_SOURCE, with a string value of Callbacks. Set your viewfocus attribute to the path to your data; this may just be a variable within your table. It's up to you to define what data will be displayed.

If you want the default functionality of the Global Table View, you can use the guifocusclass attribute to reference the TableView class. This gives you features like right-click menus, and support for cells with pointer data (displays a sampler), tracked variables (displays an edit button), and FlexScript nodes (displays a code edit button). If you're going to use this guiclass, be sure to reference the How To node (located directly above the TableView class in the tree) for which eventfunctions and variables you can or should define.

At this point you're ready to add event functions to your table. There are only two required event functions. The others are optional and allow you to override the default functionality of the table view; if you choose not to implement them, the table view will perform its default behavior (displaying the cell's text value, setting a cell's value, etc.). These event function nodes must be toggled as FlexScript.

The following event functions are required:

---getNumRows---
This function must return the number of rows to display in the table. 0 is a valid return value.
param(1) - View focus node

---getNumCols---
This function must return the number of columns to display in the table. 0 is a valid return value.
param(1) - View focus node

The following event functions are optional:

---shouldDrawRowHeaders---
If this returns 1, then the row headers will be drawn. The header row uses column 0 in the callback functions.
param(1) - View focus node

---shouldDrawColHeaders---
If this returns 1, then the column headers will be drawn.
The header column uses row 0 in the callback functions.
param(1) - View focus node

---shouldGetCellNode---
Allows you to decide whether you want to override the table view's default functionality of getting the cell node.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

If shouldGetCellNode returns 1, then the following function will be called:

---getCellNode---
Return the node associated with the row and column of the table.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

---shouldSetCellValue---
Allows you to decide whether you want to override the table view's default functionality of setting a cell's value.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldSetCellValue returns 1, then the following function will be called:

---setCellValue---
Here you can set your data based upon the value entered by the user.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
param(3) - The value to set the cell

---isCustomFormat---
Allows you to decide whether you should define a custom text format for the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If isCustomFormat returns 1, then the following functions will be called:

---getTextColor---
Return an array of RGB components that will define the color of the text. Each component should be a number between 0 and 255, as [R, G, B]; for example, red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---getTextFormat---
Return 0 for left align, 1 for center align, and 2 for right align.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---shouldGetCellColor---
Allows you to decide whether you should define a color for the cell's background.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellColor returns 1, then the following function will be called:

---getCellColor---
Return an array of RGB components that will define the cell's background color. Each component should be a number between 0 and 255, as [R, G, B]; for example, red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

---shouldGetCellText---
Allows you to decide whether you want to override the table view's default functionality of getting a cell's text.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellText returns 1, then the following function will be called:

---getCellText---
Return a string that is the text to display in the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The desired number precision
param(5) - 1 if the cell is being edited, 0 otherwise

---shouldGetTooltip---
Allows you to decide whether a tooltip should be displayed when the user selects a cell in the table.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

If shouldGetTooltip returns 1, then the following function will be called:

---getTooltip---
Return a string that is the text to display in the tooltip.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

---isReadOnly---
Return 1 to make the cell read-only. Return 0 to allow the user to edit the cell.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
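For illustration, here is a minimal sketch of the two required event functions, assuming the view focus node is a simple table node with one subnode per row (that layout is an assumption, not a requirement):

// getNumRows — toggled as FlexScript
treenode focus = param(1); // view focus node
return focus.subnodes.length;

// getNumCols — toggled as FlexScript
treenode focus = param(1); // view focus node
return focus.first ? focus.first.subnodes.length : 0;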
The attached model contains an object process flow for a BasicFR that sits between a regular conveyor and a mass flow conveyor. It watches the interval between the discrete items leaving the regular conveyor and creates a new generative rate when it detects a change, allowing you to convert a stream of hundreds of items into singular events for the MFC as needed and capitalize on the advantages of MFCs. It's a simple process flow. To use it, just connect a BasicFR between the two conveyors and add it to the object process flow as a member instance. This is a proof of concept and does not currently handle accumulation across the interface, or aggregation from multiple discrete conveyors. MFC_RateMaker.fsm
If you've ever tried to nest groups of objects inside a hierarchy of planes, you may find the drawing of the planes suboptimal and lacking information. Using the container (a modified plane) in the attached user library, you can represent the container with just an outline. A settings dashboard is installed with the library, along with some user commands and global variables. Corner prisms show the nesting layers under the prism. The option 'Use the container center' allows you to use it either as a plane, as before, or, when unselected, as a bordered frame where dropping an object or clicking and dragging within the borders behaves as though you were dropping onto or clicking/dragging the model floor. You can also choose to hide the containers entirely for the cleanest visuals.

I hope this will encourage users to use containers more, since when coupled with Templates and Object Process Flows they can increase scalability and make your developed assets more manageable. (In those cases the container becomes the member instance of the process flow or template master, and references to its components are made through pointer labels on the container rather than names, which you may want to alter for reporting purposes. The pointer labels are updated automatically when creating a new instance of the container.) ContainerMarkers_v1.fsl

If you want planes you already have in your model to adopt this style, just add this to their draw code:

return containerdraw(view, current);
This article is basically a follow-up to this question: https://answers.flexsim.com/questions/98195/simultaneous-all-or-nothing-list-pulls.html

In version 22.1, we added a new FlexScript feature called coroutines. Basically, this lets you wait for events in the middle of executing FlexScript, using the "await" keyword. Recently, I decided to revisit this question (how to pull from multiple lists at once) and to see if I could use coroutines to simplify the Process Flow. The short answer is: yes! Here is a model that behaves identically (in terms of which token gets its multi-list request filled first) but replaces about 15 activities with 2 Custom Code blocks: multipullexample_coroutines_ordering_onfulfill.fsm

In addition to having fewer activities, this model also runs faster (from ~12s to ~2s on my computer). To determine whether behavior was the same, I added a Statistics Collector and logged request/fulfill times and quantities. I did the same in the original model. The token IDs are off because the older model makes more tokens. Here's the older version, with that stats collector, if you want to do your own comparisons: multipullexample.fsm

The key lines of code are found in the Acquire All activity:

// ~line 47
List.PullResult result = list.pull("", qty, qty, token, partition, flags);
if (result.backOrder) {
    token.Success = 0;
    await result.backOrder;
    return 0;
}
// ...

The new item here is the keyword "await". When the FlexScript execution reaches this line, it is paused and waits for the "awaitable" that you specify; in this case, we want to wait for the back order to be fulfilled. In both models, tokens check all lists to see if they can acquire the complete set of resources, and if a token can't immediately fulfill one of its requests, it "goes to sleep" until something changes. In the old model, tokens would "wake up" if anything was pushed to the correct list. In contrast, tokens in the new model only "wake up" if enough items are pushed to fulfill their back order. Basically, the second model has fewer false "wakeups" and so runs quite a bit faster.
ContinuousUtilization_4.fsm

In the attached model, you'll find a Utilization vs Time chart driven by a Statistics Collector and a Process Flow. This article is an alternative approach to the chart shown here: https://answers.flexsim.com/content/kbentry/158947/utilization-vs-time-statistics-collector.html — thanks @Felix Möhlmann for putting that article together. The concept in this model is very similar, but the approach is different, and there are pros and cons to each method. This method offers some performance improvements at the cost of writing more FlexScript code. It also doesn't handle warmup time, whereas the other method does.

The general idea in this model is to keep a history of state changes for the previous hour (or other time interval). The history adds new info as states continue to change, and drops old info as it "expires" by becoming older than the time interval. I chose to use a Bundle (stored on a token label) for the history. Bundles are optimized to add data at the end, and if a bundle is paged (they are by default), it is also optimized for removing data from the beginning.

[Begin technical discussion - TLDR: bundles are fast at removing the first row and adding new rows] A paged bundle keeps blocks of memory (pages) for a certain number of rows. If a page fills up, it allocates another page. It keeps track of the location in the page for the next entry. Similarly, it keeps track of the location of the first entry on the start page. If you remove the first row, that start location is moved forward on the page. If the start location gets to the end of a page, that page is dropped and the start location moves to the next page. [End technical discussion]

The Process Flow maintains the history table. Whenever the object's state changes, the Process Flow adds a new entry to the history table. This part is straightforward. The tricky part is properly "expiring" the data. To do this, a second set of tokens waits for the last of the oldest data to expire. When the oldest set of data should expire, the tokens remove any rows that are too old and then wait for the next oldest row. If there are no expired rows, the tokens wait for one time interval and then check again, because if there is no expired data, no data can expire for at least one time interval. In this way, the history table is always kept up to date.

The Statistics Collector is configured to post-process the history table. It calculates the total utilization, including the state that the object is currently in, as well as the state being "phased out" from the history table. The Statistics Collector can do this calculation at any point in the model run, so the sampling interval (called Resolution in the model) is independent of the time window.
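As a minimal sketch of the history maintenance described above — the History label, the column layout, and the NewState and TimeWindow labels are assumptions about the model's internals:

// append the latest state change and expire old rows (a sketch)
Table history = Table(token.History);         // bundle node stored on a token label
history.addRow();                             // new entry at the end (fast for bundles)
history[history.numRows][1] = Model.time;     // column 1: time of the state change
history[history.numRows][2] = token.NewState; // column 2: the state being entered
while (history.numRows > 0 && history[1][1] < Model.time - token.TimeWindow) {
    history.deleteRow(1);                     // dropping the first row is also fast
}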
Good practice to reduce variance when experimenting is to separate the streams of things that might vary in the model, so that the random sampling is independent. An example might be that you have a number of processors that are members of the same breakdown profile (MTBF/MTTR object), where the individual breakdowns depend on the state of the processor. If during one scenario a processor is used more than before, it may sample the duration and next breakdown earlier, and therefore change the sequence of the other machines' breakdown-time samples, increasing variance. This is because the default setting for the MTBF time fields uses 'getstream(current)', which means a single stream for the MTBF object, shared across all members.

You could try to change this in the MTBF by using 'getstream(involved)', where 'involved' refers to the breakdown member machine. This causes other problems: if you're sampling processing times using the machine's stream too, then the number of items processed will again change the breakdown time samples. You may judge this to be acceptable, but in an ideal world you'd still want separate streams, and may want multiple streams for setup, processing, breakdowns, or subsystem failures.

One way to accomplish this is by changing the way getstream() works such that it can generate a stream for any value you pass to it. That might be an object, as the current getstream() accepts, or it could be the string name of the object or its path. It could also be an array, which opens a number of possibilities. In a breakdown, you could replace getstream(current) with:

getStream([current, involved]) // generates a unique stream number for the MTBF/machine pair*

In an Object Process Flow, you could replace getstream(activity) with:

getStream([current, activity]) // generates a unique stream for the instance/activity pair, and works for the general process flow too

For a processing time on a processor, instead of getstream(current) you could use getStream([current, "Processing"]) and getStream([current, "Setup"]) to generate two separate sampling streams.

The attached library contains an auto-installing user command that overrides getstream() to provide this functionality. The stream values save with the model. getStream-byvariant3.fsl

* This implementation does have some limitations, since during an experiment it does not communicate back to the master model when trying to create new streams. For this reason, you'll want to have all possible streams set up before running an experiment, or avoid the types of actions that dynamically create the requirement for new streams; that might mean keeping all possible fixed resources and task executers, and hiding/removing them from groups rather than destroying them, as the OnSet options of the parameters table do currently. Alternatively, if you consistently name the dynamically created instances, the MTBF stream expression could be:

getStream([current, involved.name])

Update: I've edited this post and library to use getStream (capital 'S'), since the override parameter (var thing) doesn't stay in place and eventually causes FlexScript build errors. So with the updated library you'll need to find/replace 'getstream' with 'getStream' from the model tree.
This model contains example subflows for having a team of travellers travelling and loading/unloading. The Run Sub Flow activity for each creates that subflow's expected labels: team, teamItem, teamDestination. Currently the offset locations are just a multiple of the x-size of the first traveller; they are determined from the pick offset and place offset functions. Further work could add traveller un/loading and wait states, and combine travel and unload tasks. TeamSubflows.fsl TeamSubflows.fsm

Update 24 Oct. '24: Corrected an incorrect label name in the TeamTravel subflow; it should be looking for 'team', not 'travellers'.
Modeling LWBS (left without being seen) patients is tricky business and can cause exception errors under certain circumstances if not done correctly. The problems usually arise from patients who exit the model early while there are pending requests queued up in the model for future activities on the patient. The attached model demonstrates the current best practice for safely modeling LWBS patients without the possibility of generating unwanted errors.

The modeling technique is very simple. Use the On Entry trigger of the waiting room object(s) to send a delayed message to itself in X minutes, where X is a time sampled from an "impatience curve" representing the amount of time a typical patient is willing to wait before they strongly consider leaving. In the On Message trigger of the waiting room object(s), I have written a code snippet that you will want to copy and modify to suit your own modeling requirements. The code snippet in the example model checks to make sure the patient is still in the waiting room waiting for an exam room at the end of the "impatience time" when the message trigger fires. Then I roll the dice using a bernoulli distribution to decide whether or not to have the patient actually leave. The snippet uses a different probability for each of two patient types (PCIs), and a default probability of 50 percent for anyone else.

Not only does this modeling approach avoid undesirable exception errors, but it is also more accurate and definitely more efficient than using the Quick Properties fields in the Patient Condition panel of a waiting room, which requires repetitive function calls every so many minutes throughout the model run!
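As a sketch of the two triggers described above — the distribution parameters, the probability, and the departure mechanics are placeholders; the attached model contains the real snippet:

// On Entry trigger of the waiting room (a sketch)
double impatienceTime = triangular(10, 90, 30); // hypothetical impatience curve, in model minutes
senddelayedmessage(current, impatienceTime, current, item, 0, 0);

// On Message trigger (a sketch)
Object patient = msgparam(1);
if (patient && patient.up == current) { // still waiting in this room?
    if (bernoulli(50, 1, 0)) {          // 50% chance the patient actually leaves
        // route the patient out of the model as an LWBS exit here
    }
}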
The attached model provides an example of how to record and display overtime hours worked by staff in a small clinic. A screen capture of the model after a 30-day run shows the various dashboards displaying information about the working hours of the staff. The code snippet in the Activity Finished trigger of the final exit activity in the patient track records information in a couple of global variables and then triggers a recording event on the OT_DataCollector. The snippet is documented in detail, so hopefully you will understand what is being done. As you can see in the Data Collector properties window, nothing special is going on there except recording the two global variables named GV_Overtime_Hours and GV_TotWork_Hours. The third and final piece of the puzzle is to create some User Defined dashboard widgets to display the information captured by the data collector in a few different ways. clinic-overtime-example-with-custom-data-collector.fsm