FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
In this article, you will learn how to export data from individual replications to an external database. This is an advanced tutorial, and assumes some exposure to databases and SQL. It also assumes that you are familiar with the Experimenter and/or Optimizer features in FlexSim. This tutorial uses PostgreSQL, but should be compatible with the most popular SQL database engines.

Model Description

Let's assume you have a model with at least one Statistics Collector, as well as an experiment ready to go. The model used in this example is fairly simple: as the model runs, a Statistics Collector fills in a table. The table makes a new row for every item that goes into the Sink, recording the time and the Type label of the item.

For the experiment, we don't have any variables, but we do have one Performance Measure: the Input of the Sink. Of course, an actual model would have many Variables, Scenarios, and Performance Measures, but this model leaves out those details. For reasons that come up later, this model also has two global variables, called g_scenario and g_replication. We will use these during the experiment phase.

Connecting to the Database

To connect to a database, you can use the Database Connector tool. Each Database Connector handles the connection to a single database; if you need to connect to more than one database, you will need more than one Database Connector. For this model, I have added a Database Connector and configured the connection tab as follows:

This configuration allows me to connect to a PostgreSQL database called "flexsim_test" running on my computer at port 9001. You can test the connection by clicking the "Test Connection" button. The test attempts to connect, and also queries the list of tables found in the database.

Setting Up the Export

Next, take a look at the Export tab:

This tab specifies that we are exporting data from StatisticsCollector1 to the table called experiment_results in the database (this table should exist before you set up the export). The Append to Table box means that when the export occurs, the data will be added to the table; otherwise, the data would be cleared from the target table. The other interesting thing here is that we are exporting more columns than the Statistics Collector has, namely the Scenario and Replication numbers. However, the expression in the From FlexSim Column column must be valid FlexQL (FlexSim's internal SQL language). Wrapping the values in Math.floor() leaves the values unchanged, and works well with FlexQL.

Setting up the Experimenter

Finally, let's take a look at the little bit of code that makes the Experimenter dump data to a database. There is code in Start of Experiment, Start of Replication, and End of Replication. In the Start of Experiment, we need to clear the table. The code in this trigger connects to the database, runs the necessary query, and then closes the connection to the database. In the Start of Replication trigger, the code simply copies the replication and scenario values into the global variables we created for this purpose. In the End of Replication trigger, the code uses function_s to call "exportAll" on the database connector that we created. This uses the settings on the Export tab to dump the data from StatisticsCollector1 to the database table, including the replication and scenario columns. These same triggers run during an optimization, so the same logic applies there too.
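The snippets below sketch what these three triggers might contain. This is a minimal sketch, not the exact code from the attached model: the connector name and path are assumptions, the Database.Connection API shown for the first trigger exists in newer FlexSim versions (the attached model may connect differently), and the replication/scenario parameter names vary by version.

    // Start of Experiment (sketch): clear the results table.
    Database.Connection con = Database.Connection("DBConnector1"); // assumed name
    con.connect();
    con.query("DELETE FROM experiment_results");
    con.disconnect();

    // Start of Replication (sketch): copy the scenario/replication numbers
    // into the globals referenced on the Export tab. Use whatever parameter
    // names your version's trigger template provides.
    g_scenario = scenario;
    g_replication = replication;

    // End of Replication (sketch): push StatisticsCollector1's data to the
    // database using the settings on the connector's Export tab.
    function_s(node("/Tools/DatabaseConnectors/DBConnector1", model()), "exportAll");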
Run the Experiment

Finally, you can run the experiment. When each replication ends, the Statistics Collector data will be exported to the database. Note that the child processes that run replications each open their own connection to the database at the end of each replication; this leads to many concurrent connections, which most databases are designed to handle.

But how can we tell that it worked? If you connect to the database with another tool, you can see the result table. Note that there are 1700 rows, and that data from Replication 5 is included. Additionally, you could use the Import tab to import all the data, or a summary of the data, into FlexSim.

Conclusion

Storing data in a database during an experiment is not difficult to do. There are many excellent tools for analyzing and visualizing database tables, which you could then use to further explore and understand your system. postgresqlexperimentdemo.fsm
View full article
Tokens and flowitems can be very difficult to add to a chart, because they don't exist on Reset and are therefore difficult to select. This article shows how you can use a Process Flow to allow a Statistics Collector to record a token's changing label value, and also to chart that value over time.

The model for this example is attached (graphlabeldata.fsm). It is a very simple model: the Scheduled Source creates three tokens, each of which creates a label called data. This label is created by choosing "Add Tracked Variable" for the value, which opens this dialog box:

The reason we want the label to be a Tracked Variable is that Tracked Variables emit an OnChange event, and we want to listen to that event. (If you use the time interval collection method, discussed later in this article, you don't need to make the label a Tracked Variable.) Each token then goes through a loop, where it waits and then updates the value. This is meant to represent a much more complicated model, where the token travels through many activities, any of which could change its label value. For this example, the model randomly changes the value of the label.

Now that we have a token and a label whose value changes as the model runs, we can work on making a chart. We want to eventually make a Statistics Collector, but Statistics Collectors can only listen to events of objects that exist after the model has been reset. Tokens and flowitems (along with their labels) are destroyed on reset, so we can't listen directly to them. However, some Process Flow activities can listen to events on tokens and flowitems, and the Statistics Collector can listen to those activities' events. For that reason, we make a second Process Flow:

This flow has an Event-Triggered Source, which listens for tokens to leave the "Init Tracked Variable" activity in the first flow. When that happens, the source creates a token, and that token immediately gets a reference to the label node (note that this is different from the value of the label node). Next, the new token goes to a Start activity called "Log Change." This activity is just a placeholder. While you could technically live without it, it makes things a little clearer, as we will discuss later. Other than providing OnEntry and OnExit events, the Start activity has no internal logic whatsoever. After passing through the Log Change activity, the new token waits for the label value to change:

In order to listen to this event, you can first sample a Tracked Variable in the Toolbox. This provides the OnChange event. Then you can update the Object field to the code shown above. Notice that every time this event happens, the token simply passes through the Log Change activity and then resumes listening.

When the original label value changes, it emits an OnChange event. When that event fires, the token listening to that event travels through the Log Change activity, which emits OnEntry and OnExit events. We can use these events in the Statistics Collector. The key to this technique is that we used Process Flow, which is good at listening to token and flowitem events, to generate Activity events, which can be used in the Statistics Collector. In the attached model, the first Statistics Collector is configured like this:

It simply listens to the On Entry of the Log Change activity.
The columns are defined as follows: the Time column uses the Model Date/Time option; the second column gets the ID of the token as an integer; and the third column gets the current value of the data label. Now that the Statistics Collector is set up, we can configure the chart to use this collector and split by the Token ID.

The process to record the label value every interval (rather than on every change) is very similar. The downside is that the data is less granular, but the upside is that a label doesn't have to be a Tracked Variable to be charted. The example model simply uses a Split activity to copy the data from the Event-Triggered Source and sends it to a similar listening loop: instead of waiting for the value to change, the second token waits for a fixed time interval. A similar Statistics Collector will allow you to create the following chart:

This approach works for every token created by the Scheduled Source. No matter how many tokens you create, each will show up on the chart:
View full article
In addition to the animations that are on the Operator and Person flowitem by default, you can easily download and add more animations from Mixamo. Below are the steps to do that. If you want to create new characters with animations, see Bone Animations.

1. Download the two attached files: FlexSim_Operator.fbx and FlexSim_F_Operator.fbx. These are the versions of the male and female operator shapes as originally exported from Mixamo, before they were modified and optimized in 3ds Max.
2. Log into your account at https://www.mixamo.com and press the Upload Character button in the Characters section of the site.
3. Drag the FlexSim_Operator.fbx or FlexSim_F_Operator.fbx shape from step 1 onto the window that appears. A progress bar will appear as the shape uploads to Mixamo's server. Once it is finished uploading, press the Next button.
4. On the Animations section of the site, select an animation you want to apply. Adjust the parameters as desired and press the Download button.
5. Select "Without Skin" in the Skin section and press the Download button. This downloads just the animation file rather than the animation, the mesh, and the bones. The mesh and bones are already in the software, so you only need the animation if you are editing the standard male or female operator shapes.
6. In FlexSim, edit the Animations for an Operator or a Person flowitem.
7. In the Animations and Components window, select Edit Animation Clips.
8. In the Animation Clips window, press the Plus button to add the animation you downloaded in step 5. Optionally, you can edit the animation clip's name and press Apply. You can also edit the clip's length or split the animation into multiple clips using this window.
9. Close the Animation Clips window when you are done.
10. In the Animations and Components window, add a new Animation and name it. Then add an Animation Clip to that animation. Then select the clip and set its Animation and Clip values to the animation/clip set in step 8.
11. Now you can use the new animation in your model on this Operator or Person flowitem.
View full article
FlexSim 2018 includes functionality for creating a custom table view GUI using a custom data source from within FlexSim. This article goes through the specifics of how to set this up. The Callbacks Custom Table Data Source defines callbacks for how many rows and columns a table should display, what text to display in each of the cells, whether they're read-only, etc. If you require further control of your table, you can use a DLL or module and subclass the table view data source in C++.

An example of how this data source is used can be seen in the Date Time Source activity properties. In the above table, the data being used to display both of these tables is exactly the same. The raw treenode table data can be seen on the right. The table on the left is displaying the hours and minutes for each start and end time, rather than the start time and duration in seconds.

In FlexSim 2018, there are a number of tables that currently utilize this new data source. They can be found at:

VIEW:/modules/ProcessFlow/windows/DateTimeArrivals/Table/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/CompositeStatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StateBarOptions/SplitterPane/States/Table

Copying one of these tables can be a good starting point for defining your own table. The first thing you have to do in order for FlexSim to recognize that your table is using a custom data source is to add a subnode to the style attribute of your table GUI. The node's name must be FS_CUSTOM_TABLE_VIEW_DATA_SOURCE, with a string value of Callbacks. Set your viewfocus attribute to be the path to your data; this may just be a variable within your table. It's up to you to define what data will be displayed.

If you want to have the default functionality of the Global Table View, you can use the guifocusclass attribute to reference the TableView class. This gives you features like right-click menus and support for cells with pointer data (displays a sampler), tracked variables (displays an edit button), and FlexScript nodes (displays a code edit button). If you're going to use this class, be sure to reference the How To node (located directly above the TableView class in the tree) for which eventfunctions and variables you can or should define.

At this point you're ready to add event functions to your table. Only two event functions are required. The others are optional and allow you to override the default functionality of the table view; if you choose not to implement the optional callback functions, the table view will perform its default behavior (displaying the cell's text value, setting a cell's value, etc.). These event function nodes must be toggled as FlexScript.

The following event functions are required:

---getNumRows---
This function must return the number of rows to display in the table. 0 is a valid return value.
param(1) - View focus node

---getNumCols---
This function must return the number of columns to display in the table. 0 is a valid return value.
param(1) - View focus node

The following event functions are optional:

---shouldDrawRowHeaders---
If this returns 1, then the row headers will be drawn. The header row uses column 0 in the callback functions.
param(1) - View focus node

---shouldDrawColHeaders---
If this returns 1, then the column headers will be drawn.
The header column uses row 0 in the callback functions.
param(1) - View focus node

---shouldGetCellNode---
Allows you to decide whether you want to override the table view's default functionality of getting the cell node.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

If shouldGetCellNode returns 1, then the following function will be called:

---getCellNode---
Return the node associated with the row and column of the table.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

---shouldSetCellValue---
Allows you to decide whether you want to override the table view's default functionality of setting a cell's value.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldSetCellValue returns 1, then the following function will be called:

---setCellValue---
Here you can set your data based upon the value entered by the user.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
param(3) - The value to set the cell

---isCustomFormat---
Allows you to decide whether you should define a custom text format for the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If isCustomFormat returns 1, then the following functions will be called:

---getTextColor---
Return an array of RGB components that will define the color of the text. Each component should be a number between 0 and 255: [R, G, B]. For example, red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---getTextFormat---
Return 0 for left align, 1 for center align, and 2 for right align.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---shouldGetCellColor---
Allows you to decide whether you should define a color for the cell's background.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellColor returns 1, then the following function will be called:

---getCellColor---
Return an array of RGB components that will define the cell's background color. Each component should be a number between 0 and 255: [R, G, B]. For example, red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

---shouldGetCellText---
Allows you to decide whether you want to override the table view's default functionality of getting a cell's text.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellText returns 1, then the following function will be called:

---getCellText---
Return a string that is the text to display in the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The desired number precision
param(5) - 1 if the cell is being edited, 0 otherwise

---shouldGetTooltip---
Allows you to decide whether a tooltip should be displayed when the user selects a cell in the table.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

If shouldGetTooltip returns 1, then the following function will be called:

---getTooltip---
Return a string that is the text to display in the tooltip.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

---isReadOnly---
Return 1 to make the cell read-only. Return 0 to allow the user to edit the cell.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
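For concreteness, here is a minimal sketch of the two required event functions, assuming the viewfocus attribute points at a plain treenode table (one subnode per row, and one subnode per cell under each row). This is only an illustration of the callback signatures, not the only way to back the view:

    // getNumRows (toggled as FlexScript)
    treenode focus = param(1); // the view focus node
    return focus.subnodes.length; // one displayed row per subnode

    // getNumCols (toggled as FlexScript)
    treenode focus = param(1);
    if (focus.subnodes.length == 0)
        return 0;
    return focus.first.subnodes.length; // cells of the first row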
View full article
The attached model provides an example of how to record and display overtime hours worked by staff in a small clinic. The following screen capture of the model after a 30-day run shows the various dashboards displaying information about the working hours of the staff in the model.

The code snippet shown for the Activity Finished Trigger of the final exit activity in the patient track records information on a couple of global variables and then triggers a recording event on the OT_DataCollector. The code snippet is documented in detail, so hopefully you will understand what is being done. As you can see from the Data Collector properties window, nothing special is going on there except recording the two global variables named GV_Overtime_Hours and GV_TotWork_Hours.

The third and final piece of the puzzle is to create some User Defined dashboard widgets to display the information captured by the data collector in a few different ways. clinic-overtime-example-with-custom-data-collector.fsm
View full article
In version 2018 and forward, you can make this chart using a Chart Template: simply drag and drop the chart from the dashboard library. This article may help you understand how the chart template works; you can use the Install button on the chart template to view the Process Flow, Statistics Collector, and Calculated Table that make the chart.

This article reviews how to use the Zone, along with the Statistics Collector, to create a bar chart of the current work-in-progress (WIP) by item type. The method scales to as many types as you need (this example uses 30 types), and can easily be adapted to text data, like an SKU. An example model (zonecontentdemo.fsm) demonstrates this method.

Creating the Process Flow

To create this chart, we first need to gather its data. In this case, it is easiest to build off the capabilities of the Zone; in particular, we will use Zone Partitions to categorize all of our objects. After we set up the Zone, we'll use a Statistics Collector to gather the data that we need. Create a new General Process Flow. You can have as many General Process Flow objects as you want, so let's make one that just deals with gathering statistics. This way, gathering statistics will not interfere with the logic in our model. The process flow should look something like this:

Here's how it works. The Listen to Entry activity is configured to listen to a group of objects. In this case, the group contains all the sources in the model, and it's listening to the OnExit of the sources. However, it could be OnEntry or OnExit of any group of 3D objects. If you want to split the statistics by Type or SKU, then any flowitem that reaches the entry group already has the appropriate labels. In this example model, when a flowitem leaves any source, a token gets created. The token makes a label called Item that stores a reference to the created item, as shown in the following picture.

The next step is to link the flowitem with the token that represents it. The Link Token to Item activity is configured like this:

Now the item has a label that links back to the token. The token then enters a zone. The Zone is partitioned by type:

Finally, the token comes to a Decide activity, which is configured not to release the token; the token will be released by the second part of the flow. Once the token is released, it exits the zone and goes to a sink.

The second part of the flow also has an Event-Triggered Source, which is configured to listen to all the sinks in the model. Again, the entry and exit objects are arbitrary; you can gather data for the entire model, or for just a small section of it, using this method. The Event-Triggered Source also caches the item in a label. At this point, we need to release the token that was created when the item entered the system. To do this, the Release Token activity is configured as seen here:

The token created on the exit side has a reference to the item, which has a reference back to the token created on the entry side. We release this token to 1, which means connector 1. Note: we could have used a Wait for Event activity in the zone, and then used the match label option to wait for the correct item to leave the system. However, this method is much, much faster, especially as the number of tokens grows.

Creating the Bar Chart Statistics Collector

The next step is to create a statistics collector that gathers data appropriate for a bar chart.
Note that this method grows the number of rows dynamically, so it won't matter how many types (or SKUs) your model has; you will still get one bar per type/SKU. In order to make the number of rows dynamic, we need to listen to the OnEntry and the OnExit of the zone activity:

Notice the shared label on this collector. Because this label is shared, both the OnEntry and OnExit events will create this label on the data object. The value of this label is the item's type. Next, we move to the Data Recording tab. Set the Row Mode to Unique Row Values, and set the row value to Partition. This means that whenever an event fires, the statistics collector will look at the Partition label on the data object. If the value is new to the statistics collector, the collector will make a new row for this value; if not, the collector will use the row that is already present.

Finally, we need to make our columns. We only need two: one for the Partition, and one for the Content of that partition. Both of these columns can use the Integer storage type and the raw display format. However, if your partition value is text-based, like an SKU, you should use the String storage type. The value for Partition is just the data object's Partition label:

Notice that the Update option is set to When Row is Added. This way, the statistics collector knows that this value will not change, and that it's available at the time the row is created. The other column is a little harder, because we need to use the getstat command:

The getstat command arguments depend on the statistic you are trying to get. In this case, we are asking the zone (current, the event node) for the Partition Content statistic, and we want the current value. Since this is a Process Flow activity, we pass in the instance as the next argument. Finally, we pass in which partition we want to get the data from: the row value. In this case, we could equivalently have passed in data.Partition. Also, notice that this column is updated by event dependency. To make sure this does what we want, we need to edit the event/column dependency table. We want the Content column to be updated when items enter and exit, so it should look like this:

Now, open the table for the statistics collector. You should see two columns. When you run the model, rows will be added as items of different types are encountered. The table will look something like this:

This screenshot came from early in the model run, before all 30 item types had been encountered, so it doesn't have 30 rows yet.

Making the Bar Chart

This is the easiest part. Create a new dashboard, and add a new bar chart. Point the chart at the statistics collector. For the Bar Title option, choose the Partition column. Be sure to include the Content column. Also, make sure that the "Show Percentages" checkbox on the Settings tab is cleared. The settings should look like this:

The resulting chart looks something like the following image. You can set the color on the Colors tab.

Ordering the Data

Because the rows of this table are created dynamically, the order of the rows will likely change from run to run. To force an ordering, you can use a Calculated Table. Since the number of rows in this table doesn't grow indefinitely, and the number is relatively small, it's okay to set the Update Mode on the Calculated Table to Always. Here's what the properties of that calculated table look like: we simply select all columns from the target collector (named CurrentContent, in this case) and order by the Partition column.
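Based on that description, the Calculated Table query is along these lines (the collector name CurrentContent comes from the example model):

    SELECT *
    FROM CurrentContent
    ORDER BY Partition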
That yields an ordered bar chart.

Example and Additional Charts

The attached example model demonstrates this method, as well as how to create a WIP By Type vs Time chart. Happy data collecting! zonecontentdemo.fsm
View full article
This article reviews one method for making a state Gantt chart for the default and alternate state profiles:

Example Model

You can download the model for this walkthrough (stateganttdemo.fsm). The model has two multiprocessors, in a Group called Multiprocessors. Each multiprocessor has two processes: Process1 and Process2. To make the chart, we will first make a Statistics Collector, and then a Calculated Table.

Making the Statistics Collector

Make a new Statistics Collector. On the Event Listening tab, use the Sampler to listen to On State Change of the group of multiprocessors. You can leave the parameter names alone. However, we need to add a label so we can record the profile number. Select the new event, and then use the green plus button in the Event Labels area to add a label for this event. Set its name to ProfileNum, and its value to the following code:

data.StateProfileNode?.rank

The event settings should look something like the following:

Next we need to set the row mode. Make sure it's set to Add Per Event, with no row value. As the final configuration step for the statistics collector, we need to set up the columns. There should be four columns in this collector:

Time - In the pick options, select Time, then Model Date/Time.
Object - In the pick options, select IDs, then ID of Event Object.
Profile - Type data.ProfileNum for the value. The default storage and display format are fine.
State - Type the following code, and set the Storage Type to String:

data.eventNode.as(Object).stats.state(data.ProfileNum).profile[data.ToState + 1][1]

The code is necessary because On State Change fires before the state is set to the new state, so the code looks up the name of the future state in the profile table. When you reset and run this model, you will see a table like the following:

Making the Calculated Table

Make a new Calculated Table, and give it the following query:

SELECT Object, Time AS StartTime, LEAD(Time) OVER (PARTITION BY Object) AS EndTime, State
FROM StatisticsCollector1
WHERE Profile = 1

This query creates an Object column, along with StartTime and EndTime columns. To get the time that the current state ends, we look at when the next state begins: the LEAD() function looks ahead in the table, and the OVER (PARTITION BY Object) clause makes LEAD() look at the next row with the same Object. We also record the State column, and we filter out the standard state profile, keeping the special multiprocessor state profile.

Once you get this query to work, change the Update Mode to By Interval, and set the interval to 20 or 30. Since the Statistics Collector table will get longer and longer, the query becomes more and more expensive as the model runs; to control how much time is spent running the query, we use an interval. The final configuration of the Calculated Table should look like this:

You will need to set the Display Format of each column on the Display Format tab (Object, Date/Time, Date/Time, and Raw).

Making the Chart

Make a new dashboard, and create a Gantt chart. Point it at the Calculated Table. When you do that, the chart should fill in all the other columns correctly.

Charting Both State Profiles for Both Objects

In order to chart both profiles on the same chart, we first need to add a column to the Statistics Collector, and then update the query in the Calculated Table. The new column should be named ObjectAndProfile, with a Storage Type of String.
Use the following code for a value:

data.eventNode.name + " - " + string.fromNum(data.ProfileNum.as(int))

Then change your query to the following:

SELECT ObjectAndProfile, Time AS StartTime, LEAD(Time) OVER (PARTITION BY ObjectAndProfile) AS EndTime, State
FROM StatisticsCollector1

With these changes, you should be able to view both profiles for both multiprocessors.
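To make the LEAD() OVER (PARTITION BY ...) behavior concrete, here is a hypothetical slice of collector data for one object and profile, next to the rows the first query would produce from it (the state names are illustrative):

    Collector rows:                      Query output:
    Time  Object           State        Object           StartTime  EndTime  State
    0     Multiprocessor1  idle         Multiprocessor1  0          5        idle
    5     Multiprocessor1  setup        Multiprocessor1  5          12       setup
    12    Multiprocessor1  processing   Multiprocessor1  12         (null)   processing

The last row's EndTime is null because no later row exists yet for that object; it fills in as the model records more state changes.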
View full article
This article demonstrates how to use the Statistics Collector and Calculated Tables to create three utilization pie charts: a state pie chart, an individual utilization pie chart, and a group utilization pie chart.

Example Model

You can download the model (utilizationdemo.fsm) to see the working demonstration. The model has a Source, a Processor, a Sink, a Dispatcher, and several operators. The operators carry flow items to and from the processor, as well as operate the processor. The operators are in a group called Operators.

State Pie Chart

First, we need to make a Statistics Collector that collects state data for the operators. The easiest way to do that is to use the pin button to pin the State statistic for any object in the model, pinning a pie chart to a new dashboard. The pin button creates a new Statistics Collector, as well as a new chart. Open the properties for the new Statistics Collector (double-click on it in the toolbox), and change its name to OperatorStatePie. On the Data Recording tab, remove the object from the Enumerated Rows table. Using the sampler, add the Operators group (you can sample it in the toolbox). Now, when you reset and run the model, the state chart should work.

Utilization Pie Chart

Often, users need to combine several states into a single value that can be used to determine the utilization of an object in the model. In order to gather this data, we can use a Calculated Table. Make a new Calculated Table, and give it the following query:

SELECT Object, (TravelEmpty + TravelLoaded + Utilize) / Model.statisticalTime AS Busy, 1 - Busy AS NotBusy
FROM OperatorStatePie

This query sums the time in several states into a total, and then divides by the statistical time. Be sure to set the name of the table (the part after FROM) to the name of your Statistics Collector. Run the model for a little while, and then click the Update button on the properties window. You should get a table like the following:

Instead of viewing the data as raw numbers, change the Display Format of each column to better represent the data. On the Display Format tab, set the Object column to display Object data, and set the other two columns to display percentages. When you switch back to the Calculations tab, the data will be formatted:

Once you get the query right, set the update mode to Always. This will update the data in the table whenever the data is needed, including every time the chart draws. If updating the table is computationally expensive, you can use the By Interval or Manual options; generally, a table with a small number of rows (1-100) is small enough to use the Always mode.

Regardless of the update mode, we can make a chart based on this table. In the dashboard, create a new Pie Chart. For the Data Source, select the calculated table. For the Pie Title, select the Object column. For the Center Data, select the Busy column. Be sure to include the Busy and NotBusy columns. This should show you a pie chart comparing each operator's busy and not-busy time.

Group Utilization

To make the final utilization chart, make a second Calculated Table. The query for the second table should be as follows:

SELECT AVG(Busy) AS AvgBusy, AVG(NotBusy) AS AvgNotBusy
FROM CalculatedTable1

Again, use the Update button to be sure the query is correct. Once it is, set the update mode to Always.
Finally, you can make the pie chart for this data:

Things to Try

If you feel comfortable with this model, you can try a couple of extra tasks, such as:

- Remove one of the operators from the group, reset, and run. The charts will update accordingly.
- Add the Processor to the group, reset, and run. The state chart should work automatically.
View full article
In version 2018 and on, you can make this chart by dragging the Throughput Per Hour by Type template from the dashboard library. If you install the template (available on the Advanced tab), you will see a Process Flow and a Statistics Collector appear in your toolbox.

One of the most common questions from FlexSim users is: how do I make a chart that shows the output every hour? You can make this chart in three steps.

Configure the Statistics Collector

First, you need a Statistics Collector. Make a new one in the toolbox (click the green plus button, select Statistics, and then select Statistics Collector). On the Event Listening tab, use the green plus button to add a timer event, and configure it as shown here:

This timer event will fire every hour (every 3600 seconds) in the model. Notice the shared label, which stores all members of the Processors group as an array. We will use this label in the next step. Once you have configured the timer, you need to set up the row mode for this collector. We want one row per processor, so we use the Processors label as the row value. Since the Processors label is an array, we will get three rows per timer event, each row corresponding to a processor.

Finally, we can add the columns. The three columns are as follows:

Time - use the pick list to select Model Date/Time from the Time menu.
Object - use the pick list to select ID of row value from the IDs menu.
Output - use the pick list to select Statistic by Object from the Object Statistics menu, and use data.rowValue as the object value in the popup.

If you use the pick options to choose these options, then the storage type and display format options should be set automatically. With these three columns in place, we can watch the table populate. Reset and run the model at high speed. Every model hour, you should see a new set of rows appear, one for each processor in the group. The table will look something like this:

Configure the Calculated Table

The Statistics Collector table from the previous steps is close to what we want, except that the output value always increases as the model runs. What about the output for just a single hour? To get that value, we can use a Calculated Table. Make a new Calculated Table, and give it the following query (in the Query field):

SELECT Time, Object, ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), 0) AS OutputPerHour
FROM StatisticsCollector1

This query uses SQL window functions. Basically, each row's OutputPerHour is its Output minus the previous row's Output for the same Object; if that difference is NULL (because it's the first row for that object), 0 is used instead. If you reset and run the model so that the collector table has at least a few rows in it, click the Update button to run the query.
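To see what the window function is doing, here is a hypothetical slice of the collector table for one processor, next to what the query computes from it:

    Time (hr)  Output    OutputPerHour
    1          5         0   (first row: LAG is NULL, so ISNULL returns 0)
    2          12        7   (12 - 5)
    3          18        6   (18 - 12)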
Notice that the Time and Object columns show raw numbers. This is because the Calculated Table can't infer the formatting of the columns; to set the formatting, use the Display Format tab. You may also wish to have the table update every hour, along with the Statistics Collector.

Make the Chart

Now that our data is correct, we can make a chart. Make a new dashboard, and create a Time Plot chart. Point the chart at the calculated table. Use the Time column for the X values and the OutputPerHour column for the Y values, and make sure to split by the Object column. If the calculated table updates every hour, then running the model should create the chart shown at the beginning of this article.

Here is the model used to create this chart (it should work in 2017 Update 2 Beta or later; the beta must be built on or after August 21, 2017). outputperhourdemo.fsm
View full article
In the last little while we've had several questions regarding picking line operations: how to make one, how to simplify one, how to set one up using Process Flow and Lists, etc. So I've put together a very simple little model, utilizing Process Flow and a basic List structure, that shows a basic picking line. It creates orders and sends them by tote via a conveyor, and Operators pick items from their respective stations to fulfill the orders into the tote, before sending it on to Checking and Shipping. Note that this could easily be adapted into an assembly line situation as well. pickingstations.fsm
View full article
When we develop a simulation model with several, or even a large number of, input variables, we need to analyze many scenarios within the simulator to capture as much detail as possible about the behavior of the system. In FlexSim, the feature that enables this type of analysis is the Experimenter. With it, we can estimate how the input variables affect the responses of an experiment, and we can rationally plan the scenarios to be executed. The video explaining a bit about experiments, and about the Experimenter in particular, can be found on our YouTube channel.
View full article
Modeling LWBS (left without being seen) patients is tricky business and can cause exception errors under certain circumstances if not done correctly. The problems usually arise from patients who exit the model early while there are still pending requests queued up in the model for future activities on those patients. The attached model demonstrates the current best practice for safely modeling LWBS patients without the possibility of generating unwanted errors.

The modeling technique is very simple: use the On Entry trigger of the waiting room object(s) to send a delayed message to itself in X minutes, where X is a time sampled from an "impatience curve" representing the amount of time a typical patient is willing to wait before they strongly consider leaving. In the On Message trigger of the waiting room object(s), I have written a code snippet that you will want to copy and modify to suit your own modeling requirements. The code snippet in the example model checks to make sure the patient is still in the waiting room waiting for an exam room at the end of the "impatience time" when the message trigger fires. Then I roll the dice using a Bernoulli distribution to decide whether or not to have the patient actually leave. The code snippet uses a different probability for each of two patient types (PCIs), and a default probability of 50 percent for anyone else.

Not only does this modeling approach avoid undesirable exception errors, but it is also more accurate and definitely more efficient than using the Quick Properties fields in the Patient Condition panel of a waiting room, which requires repetitive function calls every so many minutes throughout the model run!
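As a rough sketch of the pattern (this is not the snippet from the attached model; the distribution, probabilities, and variable names are placeholders, and FlexSim HC trigger conventions may differ from the standard item/current conventions shown here):

    // On Entry trigger of the waiting room: schedule a self-message after
    // a sampled "impatience" time.
    double impatience = exponential(0, 30, getstream(current)); // placeholder curve
    senddelayedmessage(current, impatience, current, tonum(item), 0, 0);

    // On Message trigger: if the patient is still in the waiting room,
    // decide whether they leave without being seen.
    treenode patient = tonode(msgparam(1));
    if (objectexists(patient) && patient.up == current) {
        double leavePercent = 50; // vary by patient type (PCI) as needed
        if (bernoulli(leavePercent, 1, 0, getstream(current))) {
            // route the patient to an LWBS exit here
        }
    }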
View full article
A quick demonstration of how to use a Graphical User Interface to control an entire simulation model, to build a control panel or cockpit for changing specific object properties inside the simulation model, or even to destroy and recreate object variables to customize your model even further. All of this uses the 'Tree' structure in FlexSim together with the GUI feature, available from the icon in 'Tools'. Watch it on the FlexSim Brasil YouTube channel.
View full article
In this video, we give an overview of FlexSim and demonstrate how to model directly in 3D using the standard connections available in the software, as well as the different ways to insert objects into the modeling area (the Grid). We also demonstrate a very basic way to analyze the statistics of each object, with a quick introduction to the Dashboard and the statistics present on each individual object. To watch the video, visit our YouTube channel.
View full article
This quick video demonstrates how to create custom animations for machines or equipment in FlexSim. We customize an animation on a Processor object, recreating the real movement of a stretch-film wrapping machine. See the complete step-by-step tutorial on the FlexSim Brasil YouTube channel. Attached to this article is a folder called 'Wrapper' with the 3D files, for anyone interested in creating the customizations. wrapper.rar animationcreator.fsm
View full article
One of the most powerful features of Process Flow is the ability to easily define a task sequence. However, many real-life situations require the coordination of multiple workers and machines to do a single task. This article demonstrates one approach you can use with Process Flow to coordinate multiple Task Executers, or in other words, to create a coordinated task sequence. This article discusses an example model (handoff.fsm); it might be easiest to open that model, watch it run, and read this article with the model open.

The Example Scenario

Here is a screenshot of the demo model used in this article:

Items enter the system on the left. The yellow operator must carry each item to the queue in the middle, and then wait for the purple operator to arrive. Once the two operators are both at the middle queue, the yellow operator can unload the box, and the purple operator can take it. After this point, the yellow operator is free to load another item from the left queue. The purple operator takes the item, waits for a while, and then puts the item in the sink on the right.

The interesting part of this model is the hand-off: the yellow operator must wait for the purple operator, and vice versa. This is the synchronization point, and it requires coordination of both operators. The approach used in this model allows you to add more operators to the yellow side, and more to the purple side, while still maintaining that a yellow operator must wait for a purple operator before unloading the box.

The Example Model

In addition to the 3D layout shown previously, there are five process flows in the example model. The first is a General flow, and defines the logic for each task. The second is a Task Executer flow, and defines the logic for the yellow operator. The third is also a Task Executer flow, and defines the logic for the purple operator. The remaining two flows are synchronization flows, for synchronizing between the other three flows.

Synchronizing on a Task

The basic approach in this model uses the Synchronize activity. This activity waits for one token from each incoming connector before it allows any of the tokens to move on. Here is the Yellow Purple Sync flow (a global Sub Flow) from the example model:

The flows for the yellow and purple operators each use a Run Sub Flow activity to send a token to this flow, to their respective Start activities (you can use the sampler on the Run Sub Flow activity to sample a specific Start activity in a sub flow). This is what allows the yellow and purple operators to wait for each other. However, it is important that the yellow and purple operators are both doing the same task. In this model, there is a token that represents each item that needs to be moved, and both operators get a reference to this task token. The Synchronize activity is set up to partition by that Task token. That means a yellow and a purple operator must both call this sub flow with the same task token, ensuring that each task has its own synchronization. In the example model, this kind of synchronization happens between operators, and between each task and an operator: basically, the task must wait for the operator to finish that operator's part.

The Task Flow

A task token is created every time an item enters the first queue. The Tasks flow puts that task on both the Yellow and Purple lists. In both cases, the task token does not wait to be pulled, but keeps itself on the list.
Then the task token waits for a yellow operator to finish with it, and then for the purple operator to finish with it. There is a Zone in this flow, but its only purpose is to gather statistics on how long the whole task took.

The Yellow and Purple Flows

These flows are easiest to understand when viewed side by side:

Recall that each task is put on both the Yellow and Purple lists at the exact same model time. The yellow operator waits to get a task (at the Get Task activity). Then the operator travels to the first queue, gets the item, and travels to the second queue. At this point, the yellow operator waits. At the same time, the purple operator is also waiting for the task; the purple operator just has to travel to the second queue before waiting for the yellow. Once the yellow operator arrives, the purple operator also has to wait for the yellow operator to unload the box. On the yellow side, once the purple operator arrives, the yellow operator unloads the box, and then synchronizes with the purple operator, allowing the purple operator to load the box.

Summary

The purpose of this article is to show one method for synchronizing tokens in separate flows. That method is as follows:

1. Have a token for each task.
2. As each task executer (or fixed resource) needs to synchronize, have each use a Run Sub Flow activity, putting the token in a specific Start activity.
3. The sub flow (a global Sub Flow) has a Synchronize activity, which requires a token from each participant for that task before releasing the tokens.

This is certainly not the only way to create this model. However, there are some advantages:

- By forcing the tasks to synchronize, you can gather stats on how long each phase of the task took, as well as how long the complete task took.
- You can add more yellow or purple operators by copy/paste; they simply follow their own logic.
- Each set of logic is separated: tasks, yellow operators, and purple operators each have their own flows, making each one much simpler.

The exact approach used in the example model will not work as-is for every model. However, you can apply the general principles and adapt them to your own situation.
View full article
In this phase you will be introduced to shared assets, specifically Zones.

Post Office: Phase 3

Purpose
Learn about the Zone shared asset.

Description
Restrict access to the waiting line queue so that only 5 Customers can be present at a time.

Follow Up
What happens to the Customers that show up when the Zone is at capacity? How can we add Customer behavior that says if the waiting line is already full, they leave the post office?

Video of Post Office: Phase 3 build:

If you'd like to download the completed Post Office: Phase 3 model, you will find it here.
View full article
In this phase you will be introduced to Tasks, Tasksequences, and Task Executers. In order to build Phase 2, you will need to start by having Post Office: Phase 1 pulled up in FlexSim.

Tasks and Tasksequences

Task - A single instruction or action to be performed by a TaskExecuter object (e.g., LOAD a flowitem).
Tasksequence - A series of tasks to be performed in sequence (an example tasksequence with tasks is sketched below).

Post Office: Phase 2

Purpose
Modify our model to increase visual appeal.

Description
Create visuals so that the "Customer" flowitem looks and behaves more like a person standing in line, i.e., is able to walk to each location we send them to.

Follow Up
How can the use of labels and resources help us easily add additional service desks?

Video of Post Office: Phase 2 build:

If you'd like to download the completed Post Office: Phase 2 model, you will find it here.
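For reference, here is how a simple tasksequence might be built in FlexScript. The object names are placeholders, and this sketch is illustrative rather than taken from the phase video:

    // Build and dispatch a tasksequence for Operator1: travel to Queue1,
    // load the first item there, travel to Processor1, and unload.
    Object op = Model.find("Operator1");
    Object from = Model.find("Queue1");
    Object to = Model.find("Processor1");
    treenode item = first(from); // first flowitem in the queue
    treenode ts = createemptytasksequence(op, 0, 0);
    inserttask(ts, TASKTYPE_TRAVEL, from, NULL);
    inserttask(ts, TASKTYPE_LOAD, item, from);
    inserttask(ts, TASKTYPE_TRAVEL, to, NULL);
    inserttask(ts, TASKTYPE_UNLOAD, item, to);
    dispatchtasksequence(ts);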
View full article
Building a Model

In this training section we will introduce how to begin a modeling project. To begin, you need to gather all available information that describes the system, from how it looks to how it behaves. To build a model, we take an incremental approach to the design and implementation of the model; think layers. We will be building a model in the following steps:

1. Create a 3D layout (this includes a CAD drawing, if available)
2. Create a flowchart of the process steps
3. Define the logic for each step, testing as you go
4. Revise logical steps as needed
5. Run the model
6. View results

Post Office: Phase 1

Purpose
Gain familiarity with abstraction in models, flowcharting steps, and basic ProcessFlow activities.

Description
Customers arrive at a post office every 45 seconds, on average. The statistical probability distribution which best simulates the inter-arrival pattern is an exponential distribution with a location value of 0 and a scale value of 45. The service time at the one and only service window in the post office is lognormal2(45, 3.1, 0.5) seconds. When the delay for service is complete, Customers leave the post office. (Both expressions are sketched at the end of this phase.)

Follow Up (see video below at time 7:16)
If there is more than one Token at the Delay activity, what does that say about how many Customers can be served at once? How can we apply limits to Tokens at activities?

Video of Post Office: Phase 1:

If you'd like to download the completed Post Office: Phase 1 model, you will find it here.
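For reference, here are the two distributions from the description above, roughly as they would be entered in the Process Flow activities (the stream argument is the one FlexSim typically supplies in these fields):

    exponential(0, 45, getstream(activity))       // Customer inter-arrival time (seconds)
    lognormal2(45, 3.1, 0.5, getstream(activity)) // service time at the window (seconds)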
View full article
In this example model you'll see two identical elevator setups. However, you will notice that ElevatorBank1 allows the patient to move to the next floor properly, whereas ElevatorBank1_2 floats the patient up along the network node instead of using the elevator. There are a few steps you must follow to ensure you have a properly working elevator in your model:

1. Make sure everything is working the way you intended, without an elevator.
2. Add in the elevator, select it, and then check the 'Connect to Path' box as seen below.
3. Ensure that the elevator is connected to the nearest path node and any nodes above it.

One thing to note is that, after resetting, if you click on any of the path nodes that the elevator is connected to, you will see that the On Arrival Trigger now says Send Message to Request Elevator. This is the code that actually calls the elevator when a patient arrives at the node. The elevator automatically adds this to its connected nodes on reset, but this trigger option can be added to any node. Another good practice, especially if patients walk by the elevator without always using it, is to make a separate node off on a spur; that way patients aren't triggering the elevator every time they walk by.
View full article