FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
If you've ever tried to nest groups of objects inside a hierarchy of planes, you may find the drawing of the planes suboptimal and lacking information. Using the container (a modified plane) in the attached user library, you can represent the container with just an outline. A settings dashboard is installed with the library, along with some user commands and global variables. Corner prisms show the nesting layers under the prism.

The option 'Use the container center' allows you to use it either as a plane, as before, or, when unselected, as a bordered frame where dropping an object or clicking and dragging within the borders behaves as though you are dropping onto or clicking/dragging the model floor. You can also choose to hide the containers entirely for the cleanest visuals.

I hope this will encourage users to use containers more, since when coupled with Templates and Object Process Flows they can increase scalability and make your developed assets more manageable. (In those cases the container becomes the member instance of the process flow or template master, and references to its components are made through pointer labels on the container rather than names, which you may want to alter for reporting purposes. The pointer labels are updated automatically when creating a new instance of the container.)

ContainerMarkers_v1.fsl

If you want planes you already have in your model to adopt this style, just add this to their draw code: return containerdraw(view,current);
View full article
This article describes an example of Reinforcement Learning being used to solve scheduling problems. See the model and python files in the attached zip file: SchedulingRL.zip

Problem Description

This model represents a generic sheet metal processing plant. There are four machines in series. Each job requires time on all four machines. Jobs come in batches of 10. A poor sequence of jobs will cause blocking between items, lowering throughput. If the time between batches is long, such as a shift or a day, you could use the optimizer to determine the best sequence. If the time between batches is short, however, the optimizer may not be feasible. For real sequencing problems, the time to find a good sequence can be anywhere from 5 minutes to an hour, or even longer. This makes it impractical for high-velocity situations.

The attached model requests a decision every time the first machine in the series is available. The only action is an index for the Nth available job. So the decision can be interpreted as "which job should I do next?"

Solution

The general solution is to use reinforcement learning. However, this problem required customized python scripts:

- The model uses custom parameters for observations. This allows arbitrary values for observations.
- The model uses a custom observation space. The observations include a table of the required times at each station for the remaining jobs. They also include an array of the in-progress jobs and their predicted remaining times. By using a Dict space, the python scripts can combine all the observations into a single space.
- The model uses an Action Mask. An Action Mask is a binary array with one value per value of the action. This tells the RL algorithm about invalid options.

The python scripts require the sb3-contrib package. Use pip install sb3-contrib to install it.

Results

After training for 500k time-steps, the agent learns to choose jobs moderately well. If you run the inference script, you can use the experimenter to compare a random policy to a trained agent:
View full article
What is ODBC?

FlexSim can use ODBC, and ODBC can connect to many different kinds of data sources, including files (like Excel) and databases. It allows you to use SQL queries to get data from any supported data source. ODBC uses a driver to determine how to get data from a given data source. A driver is a translator between ODBC and whatever data source you are querying. For example, there is a driver for Excel files. If you have that driver on your system, you could use ODBC to query data from Excel files. As another example, there is a driver for SAP HANA, the database for SAP. If you have that driver, then ODBC will know how to talk to SAP. This means that if the correct driver is installed and working correctly, you can use FlexSim to query any ODBC data source.

Using the Database Connector

FlexSim has a tool called the Database Connector. You can use it to configure a connection to a database. This article uses that tool. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/DatabaseConnectors/

As an example, let's say that you want to connect to Excel using ODBC. Assuming you have the Excel ODBC driver installed on your system, you can configure a Database Connector to look like this: Note that you must specify the full connection string. In the connection string, you can see that the driver and the file are specified. Then you can query the data in a given sheet with a query like this:

SELECT * FROM [Sheet1$]

You can find more information on querying Excel files at websites like this: https://querysurge.zendesk.com/hc/en-us/articles/205766136-Writing-SQL-Queries-against-Excel-files-Excel-SQL-

If you use Office 365, you may need to install the Microsoft Access Database Engine 2016 Redistributable. This includes newer drivers for Excel and Access. Be sure to install it with the /quiet flag on the command line. Instructions can be found in this troubleshooting guide: https://docs.microsoft.com/en-us/office/troubleshoot/access/cannot-use-odbc-or-oledb

Note that FlexSim has an Excel tool, which is usually easier to use. That tool requires Excel to be installed, but does not require the ODBC driver for Excel to be installed on your computer. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/ExcelInterface/. Excel makes a good example because most people have it, and it's easy to get the driver for it.

Connection Strings

Different kinds of connections require different connection strings. The following list has an example connection string for a few data sources:

Excel: Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=Path\To\Excel\File.xlsx
Access: Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=Path\To\Access.accdb
SQLite: Driver={SQLite3 ODBC Driver};Database=Path\to\sqlite.db
SAP HANA: DRIVER={HDBODBC};UID=myUser;PWD=myPassword;SERVERNODE=myServer:30015

Checking for Drivers

Note that each connection string specifies a driver, and then additional information. The additional information depends on the driver you are using. To determine which drivers are on your system, you need to open the ODBC Data Sources Administrator window. To do that, press the Windows key and type ODBC. Then choose the option called ODBC Data Sources (64 bit). If you are running 32-bit FlexSim, open the 32-bit version. Go to the Drivers tab. Here is what my Drivers tab looks like: You can see I have drivers for Access, Excel, SQL Server, and SQLite3. I don't have drivers for SAP HANA. If I did, you'd see a driver named HDBODBC in the list. To access that kind of database, I'd need to install that driver. You can also see that the name of the driver used in the connection string must exactly match what is shown here.

Other Info

You may see an exception appear when you test the connection to your database. If the view shows that the connection succeeded, then it has succeeded. The exception happens because FlexSim tries to get a list of tables from the database that it's querying. FlexSim may not guess correctly for your particular data source. That exception can be safely ignored.

If you used the old db() commands in the past, consider upgrading to the Database Connector. It will be orders of magnitude faster to read in an entire table.
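If you prefer to run a query from a script instead of (or in addition to) the connector's import mappings, a minimal FlexScript sketch along these lines should work. This is only a sketch: it assumes a Database Connector named "ExcelConnection" and a Global Table named "GlobalTable1", and the method names (connect, query, cloneTo, disconnect) follow the Database Connector's FlexScript API as documented for recent versions, so check your version's documentation before relying on them.

// Minimal sketch: run an ODBC query through a Database Connector from FlexScript.
// "ExcelConnection" and "GlobalTable1" are assumed names; replace them with yours.
Database.Connection con = Database.Connection("ExcelConnection");
con.connect();
Database.ResultSet result = con.query("SELECT * FROM [Sheet1$]");
result.cloneTo(Table("GlobalTable1"));   // copy the result set into a global table
con.disconnect();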
View full article
Tokens and Flow Items can be very difficult to add to a chart because they don't exist on Reset, making them difficult to select. This article shows how you can use a Process Flow to allow a Statistics Collector to record a token's changing label value, and also to chart that value over time.

The model for this example is attached (graphlabeldata.fsm). It is a very simple model: the Scheduled Source creates three tokens, each of which creates a label called data. This label is created by choosing "Add Tracked Variable" for the value, which opens this dialog box: The reason we want the label to be a Tracked Variable is that Tracked Variables emit an OnChange event. We want to listen to that event. If you use the time interval collection method, discussed later in this article, you don't need to make the label a Tracked Variable.

Each token then goes through a loop, where it waits and then updates the value. This is meant to represent a much more complicated model, where the token travels through many activities, any of which could change its label value. For this example, the model randomly changes the value on the label.

Now that we have a token and a label whose value is changing as the model runs, we can work on making a chart. We want to eventually make a Statistics Collector, but Statistics Collectors can only listen to events of objects that exist after the model has been reset. Tokens and FlowItems (along with their labels) are destroyed on reset, so we can't listen directly to them. However, some Process Flow activities can listen to events on tokens and flowitems, and the Statistics Collector can listen to those events. For that reason, we make a second Process Flow.

This flow has an Event-Triggered Source, which listens for tokens to leave the "Init Tracked Variable" activity in the first flow. When that happens, the source creates a token, and that token immediately gets a reference to the label node (note that this is different than the value of the label node). Next, the new token goes to a Start activity called "Log Change." This activity is just a placeholder. While you could technically live without this activity, it makes things a little clearer, as we will discuss later. Other than providing OnEntry and OnExit events, the Start activity has no internal logic whatsoever.

After passing through the Log Change activity, the new token waits for the label value to change. In order to listen to this event, you can first sample a Tracked Variable in the Toolbox. This provides the OnChange event. Then you can update the Object field to the code shown above. Notice that every time this event happens, the token simply passes through the Log Change activity and then resumes listening.

When the original label value changes, it emits an OnChange event. When that event fires, the token listening to that event travels through the Log Change activity, which emits OnEntry and OnExit events. We can use these events in the Statistics Collector. The key to this technique is that we used Process Flow, which is good at listening to token and flowitem events, to generate Activity events, which can be used in the Statistics Collector.

In the attached model, the first Statistics Collector is configured like this: It simply listens to the On Entry of the Log Change activity. The columns are defined as follows: the Time column uses the Model Date/Time option; the second column gets the ID of the token as an integer; the third column gets the current value of the Data label. Now that the Statistics Collector is set up, we can configure the chart to use this collector, and split by the Token ID.

The process to record the label value every interval (rather than on every change) is very similar. The downside is that the data is less granular, but the upside is that a label doesn't have to be a Tracked Variable to be charted. The example model simply uses a Split activity to copy the data from the Event-Triggered Source, and sends it to a similar listening loop: instead of waiting for the value to change, the second token waits for a fixed time interval. A similar Statistics Collector will allow you to create the following chart:

This approach works for every token created by the scheduled source. No matter how many tokens you create, each will show up on the chart.
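To make the "reference to the label node" idea concrete, the label assignment on the new token can look something like this minimal FlexScript sketch. The names here (Original, LabelRef, data) are assumptions for illustration and are not necessarily the attached model's exact code; the point is that the new token stores the label node itself, not a copy of its value.

// Minimal sketch (assumed label names): keep a reference to the label node,
// so later reads always see the tracked variable's latest value.
treenode original = token.Original;                 // hypothetical label holding the original token
token.LabelRef = original.labels.assert("data");    // node reference to the tracked-variable label
// Reading token.LabelRef.value later returns the label's value at that moment.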
View full article
In version 2018 and forward, you can make this chart using a Chart Template. You can simply drag and drop the chart from the dashboard library. This article may help you understand how the chart template works. You can use the Install button on the chart template to view the Process Flow, Statistics Collector, and Calculated Table that make the chart.

This article reviews how to use the Zone, along with the Statistics Collector, to create a bar chart of the current work-in-progress (WIP) by item type. The method scales to as many types as you need (this example uses 30 types), and can easily be adapted to text data, like SKU. An example model (zonecontentdemo.fsm) demonstrates this method.

Creating the Process Flow

To create this chart, we first need to gather the data for it. In this case, it is easiest to build off the capabilities of the Zone. In particular, we will use Zone Partitions to categorize all of our objects. After we set up the Zone, we'll use a Statistics Collector to gather the data that we need.

Create a new General Process Flow. You can have as many General Process Flow objects as you want, so let's make one that just deals with gathering statistics. This way, gathering statistics will not interfere with the logic in our model. The process flow should look something like this:

Here's how it works. The Listen to Entry activity is configured to listen to a group of objects. In this case, the group contains all the sources in the model, and it's listening to the OnExit of the sources. However, it could be OnEntry or OnExit of any group of 3D objects. If you want to split the statistics by Type or SKU, then any flowitem that reaches the entry group must already have the appropriate labels. In this example model, when a flowitem leaves any source, a token gets created. The token makes a label called Item that stores a reference to the created item, as shown in the following picture.

The next step is to link the flowitem with the token that represents it. The Link Token to Item activity is configured like this: Now the Item has a label that links back to the token. The token then enters a zone. The Zone is partitioned by type: At last, the token comes to a Decide activity. The Decide is configured not to release the token. The token will be released by the second part of the flow. Once the token is released, it exits the zone and goes to a sink.

The second part of the flow also has an Event-Triggered Source, which is configured to listen to all the sinks in the model. Again, the entry objects and exit objects are arbitrary; you can gather data for the entire model, or for just a small section of the model, using this method. The Event-Triggered Source also caches off the item in a label. At this point, we need to release the token that was created when the item entered the system. To do this, the Release Token activity is configured as seen here: The token created on the exit side has a reference to the item, which has a reference back to the token created on the entry side. We release this token to 1, which means connector 1.

Note: We could have used a Wait for Event activity in the zone, and then used the match label option to wait for the correct item to leave the system. However, this method is much, much faster, especially as the number of tokens grows.

Creating the Bar Chart Statistics Collector

The next step is to create a Statistics Collector that gathers data appropriate for a bar chart. Note that this method will grow the number of rows dynamically, so it won't matter how many types (or SKUs) your model has; you will still get one bar per type/SKU. In order to make the number of rows dynamic, we need to listen to the OnEntry and the OnExit of the zone activity:

Notice the shared label on this collector. Because this label is shared, both the OnEntry and OnExit events will create this label on the data object. The value of this label is the item's type. Next, we move to the Data Recording tab. Set the Row Mode to Unique Row Values, and set the row value to Partition. This means that whenever an event fires, the statistics collector will look at the Partition label on the data object. If the value is new to the statistics collector, the collector will make a new row for this value. If not, then the collector will use the row that is already present.

Finally, we need to make our columns. We only need two columns: one for the Partition, and one for the Content of that partition. Both of these columns can use the Integer storage type and raw display format. However, if your partition value is text-based, like an SKU, you should use the String storage type. The value for Partition is just the data object's Partition label: Notice that the Update option is set to When Row is Added. This way, the statistics collector knows that this value will not change, and that it's available at the time the row is created.

The other column is a little harder, because we need to use the getstat command: The getstat command arguments depend on the stat you are trying to get. In this case, we are asking the zone (current, the event node) for the Partition Content statistic. We want the current value. Since this is a process flow activity, we pass in the instance as the next argument. Finally, we pass in which partition we want to get the data from, the row value. In this case, we could equivalently have passed in data.Partition. Also, notice that this column is updated by event dependency. To make sure this does what we want, we need to edit the event/column dependency table. We want the Content column to be updated when items enter and exit, so it should look like this:

Now, open the table for the statistics collector. You should see two columns. When you run the model, rows will be added as items of different types are encountered. The table will look something like this: This screenshot came from early in the model run, before all 30 types of item have been encountered, so it doesn't have 30 rows yet.

Making the Bar Chart

This is the easiest part. Create a new dashboard, and add a new bar chart. Point the chart at the statistics collector. For the Bar Title option, choose the Partition column. Be sure to include the Content column. Also, make sure that the "Show Percentages" checkbox on the Settings tab is cleared. The settings should look like this: The resulting chart looks something like the following image. You can set the color on the Colors tab.

Ordering the Data

Because the rows of this table are created dynamically, the order of the rows will likely change from run to run. To force an ordering, you can use a Calculated Table. Since the number of rows in this table doesn't grow indefinitely, and the number is relatively small, it's okay to set the Update Mode on the Calculated Table to Always. Here's what the properties of that calculated table look like: We simply select all columns from the target collector (CurrentContent, in this case) and order by the Partition column. That yields an ordered bar chart:

Example and Additional Charts

The attached example model demonstrates this method, as well as how to create a WIP By Type vs Time chart. Happy data collecting!

zonecontentdemo.fsm
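As a recap of the Content column described in the statistics collector section above, its value boils down to a single getstat call along the lines of the sketch below. This is a reconstruction from the description in this article, not the sampler's exact output; the statistic name, argument order, and how the process flow instance is referenced may differ in your version, so prefer sampling the statistic and letting FlexSim generate the call.

// Sketch of the Content column value, per the description above:
// ask the Zone activity (current, the event node) for the current value of its
// "Partition Content" statistic, for the process flow instance, for this row's partition.
getstat(current, "Partition Content", STAT_CURRENT, instance, data.Partition)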
View full article
This article demonstrates how to use the Statistics Collector and Calculated Tables to create three utilization pie charts:

- a state pie chart
- an individual utilization pie chart
- a group utilization pie chart

Example Model

You can download the model (utilizationdemo.fsm) to see the working demonstration. The model has a Source, a Processor, a Sink, a Dispatcher, and several operators. The operators carry flow items to and from the processor, as well as operate the processor. The operators are in a group called Operators.

State Pie Chart

First, we need to make a Statistics Collector that collects state data for the operators. The easiest way to do that is to use the pin button to pin the State statistic for any object in the model. Use the pin button to pin a pie chart to a new dashboard. The pin button creates a new Statistics Collector, as well as a new chart. Open the properties for the new Statistics Collector (double click on it in the toolbox), and change its name to OperatorStatePie. On the Data Recording tab, remove the object from the Enumerated Rows table. Using the sampler, add the Operators group (you can sample it in the toolbox). Now, when you reset and run the model, the state chart should work.

Utilization Pie Chart

Often, users need to combine several states into a single value that can be used to determine the utilization of an object in the model. In order to gather this data, we can use a Calculated Table. Make a new Calculated Table, and give it the following query:

SELECT Object, (TravelEmpty + TravelLoaded + Utilize) / Model.statisticalTime AS Busy, 1 - Busy AS NotBusy FROM OperatorStatePie

This query sums the time in several states into a total, and then divides by the statistical time. Be sure to set the name of the table (the part after FROM) to the name of your Statistics Collector. Run the model for a little bit of time, and then click the Update button on the properties window. You should get a table like the following:

Instead of viewing the data as just numbers, change the Display Format of each column to better represent the data. On the Display Format tab, set the Object column to display Object data. Then set the other two columns to display percentages. When you switch back to the Calculations tab, the data will be formatted.

Once the query is right, set the update mode to Always. This will update the data in the table whenever the data is needed, including every time the chart draws. If updating the table is computationally expensive, you can use the By Interval or Manual options. Generally, a small number of rows (1-100) is small enough to use the Always mode.

Regardless of the update mode, we can make a chart based on this table. In the dashboard, create a new Pie Chart. For the Data Source, select the calculated table. For the Pie Title, select the Object column. For the Center Data, select the Busy column. Be sure to include the Busy and NotBusy columns. This should show you a pie chart comparing each operator's busy and not-busy time.

Group Utilization

To make the final utilization chart, make a second Calculated Table. The query for the second table should be as follows:

SELECT AVG(Busy) AS AvgBusy, AVG(NotBusy) AS AvgNotBusy FROM CalculatedTable1

Again, use the Update button to be sure the query is correct. Once it is, set the update mode to Always. Finally, you can make the pie chart for this data:

Things to Try

If you feel comfortable with this model, you can try a couple of extra tasks, such as:

- Remove one of the operators from the group, reset, and run. The charts will update accordingly.
- Add the Processor to the group, reset, and run. The state chart should work automatically.
View full article
A narrated video demonstration of the FlexSim Healthcare Tutorial described in the FlexSim 2020 User Manual has been released! Here is a link to the written documentation: https://docs.flexsim.com/en/20.0/Tutorials/FlexSimHC/OverviewFlexSimHC/ Here's a link to the video: https://vimeo.com/394012280
View full article
This example model was created to clarify preemption of task sequences created in process flows. Specifically, it will show:

1) that preempting task sequences generated in a process flow does not require the token generating the sequence to be preempted;
2) that you can push a partially created sequence to a tasksequence list type;
3) how to use recursive calls to a subflow;

and afterwards:

4) how the milestone task can be used in a process-flow-generated task sequence;
5) the use of a nodefunction task type.

Each queue is a member of the object process flow shown in the picture. Each generates a box along with a job to take it to another queue, and pulls an initial task executer from the list of FreeTEs. The first task, to travel to and collect the box, is preemptable, so we push the task sequence to a Preemptable Job list before we add that travel task, and pull it off the list when the task has completed. The rest of the task sequence is a standard load, travel and unload set of tasks.

When the job is complete we know that the TE is free and could preempt another TE - which could be desired if the newly free TE is closer to the pickup queue than the one currently assigned. The CheckPreempt subflow is called with a label referencing the TE that has become free. This TE may not be the original TE that we pulled from the list of FreeTEs when the job was started, so we discover the current TE by looking for the task sequence's owner object - done with the function findownerobject(), since it does not cache the value in the same way ownerobject() does, and we could have had more than two TEs assigned to this tasksequence. Since the task sequence gets destroyed, I do this in the Assign Labels activity before we finish the task sequence.

The CheckPreempt subflow first tries to pull a preemptable job where the invoking TE is closer than the current executer of that job, and chooses the one where the difference is most pronounced. The tasksequence list type has the distance field prepopulated, and from that the two other field expressions, otherDistance and howMuchCloser, are derived. If no task sequence is chosen, then we push the finishing TE to the FreeTEs list. If we choose a task sequence to preempt, then we first label the token with the otherTE and preempt the task sequence with a simple zero-delay task. Then we redispatch the task sequence to the TE that is calling CheckPreempt. Since we now know that we've freed the otherTE from the task sequence it had, we can invoke the CheckPreempt subflow for that task executer, knowing that it will either get put onto the FreeTEs list or get another task sequence and preempt another TE. This is therefore a recursive subflow call, with the exit condition being that no preemptable tasksequence is found.

Note that no token preemption from activities takes place. This is because the tasksequence is still tied to the token and its activity, which will still receive the callback when a task is complete, regardless of which task executer actioned the task.

TEpreemptIfCloser1.fsm

Next: Changing colors to indicate preemptable TEs and their destination - using milestone and nodefunction tasks.

To better see the allocation and preempt activities, we'll now change the model so that the TE color reflects the following:

White when idle
Gray when busy and not preemptable
Matching the pickup queue's color when preemptable

First of all I defined two user commands - one to set the color of the TE (setTEcolor) and another (commandNode) to get the executable node of a user command. This is so that I can add the fn() SetColor activity (a custom task of type nodefunction), specify the function using commandNode("setTEcolor"), and pass in the queue color as parameter 2 (parameter 1 will be the TE that is executing this task). This means the user command code is just:

Object te=param(1);
Color col=param(2);
te.color=col;

The milestone task allows us to define a set of tasks that need to be repeated if the TE gets preempted. It's added again using the Custom Task activity, selecting the type as Milestone, and it has a parameter to specify the number of subsequent tasks that should be considered within the preemption range that will trigger tasks to be repeated. Since we want the travel preemption to retrigger the setting of the color, we set the range to 2. However, since the token only needs to be notified when the travel task is complete (so that it can go on to the next process) and until then waits in the travel task, we can uncheck the "Wait Until Complete" box of the Milestone and fn() SetColor activities. If you don't clear this for those two tasks, expect some misbehavior/desynchronization with the token.

Finally, there needs to be a reset trigger on the TE to set the color to white, and a regular Change Visual activity after the pickup point is reached to set it to gray.

Note: I prefer the use of commands to sending messages, which would be another option, mostly because users can tell the intent of the call without having to inspect the message code.

TEpreemptifCloser2.fsm
View full article
This article is basically a follow-up to this question: https://answers.flexsim.com/questions/98195/simultaneous-all-or-nothing-list-pulls.html

In version 22.1, we added a new FlexScript feature called Coroutines. Basically, this lets you wait for events in the middle of executing FlexScript, using the "await" keyword. Recently, I decided to revisit this question (how to pull from multiple lists at once) and to see if I could use coroutines to simplify the Process Flow. The short answer is: yes! Here is a model that behaves identically (in terms of which token gets its multi-list request filled first) but replaces about 15 activities with 2 Custom Code blocks: multipullexample_coroutines_ordering_onfulfill.fsm

In addition to having fewer activities, this model also runs faster (from ~12s to ~2s on my computer). To determine whether behavior was the same, I added a Statistics Collector and logged request/fulfill times and quantities. I did the same in the original model. The token IDs are off because the older model makes more tokens. Here's the older version, but with that stats collector, if you want to do your own comparisons: multipullexample.fsm

The key lines of code are found in the Acquire All activity:

// ~line 47
List.PullResult result = list.pull("", qty, qty, token, partition, flags);
if (result.backOrder) {
    token.Success = 0;
    await result.backOrder;
    return 0;
}
// ...

The new item here is the keyword "await". When the FlexScript execution reaches this line, the FlexScript execution is paused, and waits for the "awaitable" that you specify. In this case, we want to wait for the back order to be fulfilled.

In both models, tokens check all lists to see if they can acquire the complete set of resources. In both models, if a token can't immediately fulfill one of its requests, tokens "go to sleep" until something changes. In the old model, tokens would "wake up" if anything was pushed to the correct list. In contrast, tokens in the new model only "wake up" if enough items are pushed to fulfill their back order. Basically, the second model has fewer false "wakeups" and so runs quite a bit faster.
View full article
This model contains example subflows for having a team of travellers travelling and loading/unloading. The Run Subflow activity for each creates that subflow's expected labels: team, teamItem, teamDesintation. Currently the offset locations are just a multiple of the x-size of the first traveller. They are determined from the pick offset and place offset functions. Further work could add traveller un/loading and wait states, and combine travel and unload tasks.

TeamSubflows.fsl TeamSubflows.fsm

Update 24 Oct 2024: Corrected an incorrect label name in the TeamTravel subflow; it should look for 'team', not 'travellers'.
View full article
In this example model you'll see two identical elevator setups. However, you will notice that ElevatorBank1 allows the patient to move to the next floor properly, whereas ElevatorBank1_2 will float the patient up the network node instead of using the elevator. There are a few steps you must follow to ensure you have a properly working elevator in your model:

1. Make sure everything is working the way you intended, without an elevator.
2. Now you can add in the elevator, select it, and then check the 'Connect to Path' box as seen below.
3. Now ensure that the elevator is connected to the nearest path node and any nodes above it.

One thing to note is that, after resetting, if you click on any of the path nodes that the elevator is connected to, you will see that the On Arrival trigger now says Send Message to Request Elevator. This is the code that actually calls the elevator when a patient arrives at the node. The elevator automatically adds this to the nodes it is connected to when resetting, but this trigger option can be added to any node.

Another good practice, especially if patients walk by the elevator without always using it, is to make a separate node off on a spur. That way patients aren't triggering the elevator every time they walk by.
View full article
This article reviews one method for making a state Gantt chart for the default and alternate state profiles.

Example Model

You can download the model for this walkthrough (stateganttdemo.fsm). The model has two multiprocessors, in a Group called Multiprocessors. Each multiprocessor has two processes: Process1 and Process2. To make the chart, we will first make a Statistics Collector, and then a Calculated Table.

Making the Statistics Collector

Make a new Statistics Collector. On the Event Listening tab, use the Sampler to listen to On State Change of the group of multiprocessors. You can leave the parameter names alone. However, we need to add a label so we can record the profile number. Select the new event, and then use the green plus button in the Event Labels area to add a label for this event. Set its name to ProfileNum, and its value to the following code:

data.StateProfileNode?.rank

The event settings should look something like the following: Next we need to set the row mode. Make sure it's set to Add Per Event, with no row value. As the final configuration step for the statistics collector, we need to set up the columns. There should be four columns in this collector:

- Time - In the pick options, select Time, then Model Date/Time.
- Object - In the pick options, select IDs, then ID of Event Object.
- Profile - Type data.ProfileNum for the value. The default storage and display format are fine.
- State - Set the Storage Type to String, and type the following code:

data.eventNode.as(Object).stats.state(data.ProfileNum).profile[data.ToState + 1][1]

The code is necessary because On State Change occurs before the state is set to the new state, so the code looks up the name of the future state in the profile table. When you reset and run this model, you will see a table like the following:

Making the Calculated Table

Make a new Calculated Table, and give it the following query:

SELECT Object, Time as StartTime, LEAD(Time) OVER (PARTITION BY Object) AS EndTime, State FROM StatisticsCollector1 WHERE Profile = 1

This query creates an Object column as well as a Time column. To get the time that the current state ends, we look to when the next state begins. The LEAD() function looks ahead in the table, and the OVER (PARTITION BY Object) clause makes sure that LEAD() looks ahead to the next row with the same Object. We also record the State column, and filter out the standard state profile, keeping the special multiprocessor state profile.

Once you get this query to work, change the Update Mode to By Interval, and set the interval to 20 or 30. Since the Statistics Collector table will get longer and longer, the query will become more and more expensive as the model runs. To control how much time is spent running the query, we use an interval. The final configuration of the Calculated Table should look like this: You will need to set the Display Format of each column on the Display Format tab (Object, Date/Time, Date/Time, and Raw).

Making the Chart

Make a new dashboard, and create a Gantt chart. Point it at the Calculated Table. When you do that, the chart should fill in all the other columns correctly.

Charting Both State Profiles for Both Objects

In order to chart both profiles on the same chart, we first need to add a column to the Statistics Collector, and then update the query in the Calculated Table. The new column should be named ObjectAndProfile, with a Storage Type of String. Use the following code for a value:

data.eventNode.name + " - " + string.fromNum(data.ProfileNum.as(int))

Then change your query to the following:

SELECT ObjectAndProfile, Time as StartTime, LEAD(Time) OVER (PARTITION BY ObjectAndProfile) AS EndTime, State FROM StatisticsCollector1

With these changes, you should be able to view both profiles for both multiprocessors.
View full article
Many simulation models need to show operator or machine utilization by time period of day, or per shift: shiftutilization.fsm

Under Construction

Summary

Until this article is completed, here is the basic idea. I use a process flow to create one token per operator in the group. Each operator token creates one token per shift. The shift token creates its own Categorical Tracked Variable (which is how state information is stored on all objects). The shift token just remains alive, holding the tracked variable. The operator token goes into a loop, listening to OnChange of the 3D operator's state profile node. Whenever the state changes, the token sets the shift token's state label to the same state. This essentially copies the state changes onto another tracked variable. Another child token listens for when the shift changes. When the shift changes, the operator token puts the current shift profile into some unused state (STATE_SCHEDULED_DOWN, in this case), and then applies new state changes to the correct shift token's label.

A Statistics Collector listens for when the operator token starts the main loop at the beginning of each shift. When this happens, the collector saves a row value that is an array, composed of the operator and the shift token. The row mode is set to unique, so that each unique combination of operator/profile gets its own row. Each column in the collector is made to show the data from the shift corresponding to that row. The result is the table shown in the article, which can be plotted with a pie chart as shown above.
View full article
In this quick video we demonstrate how to create custom animations on machines or equipment in FlexSim. We customize an animation on the Processor object, recreating the real movement of a stretch-film wrapping machine. See the full step-by-step tutorial for this task on the FlexSim Brasil YouTube channel. Also attached to this article is a folder called 'Wrapper' with the 3D files, so anyone interested in creating these customizations can access and use them. wrapper.rar animationcreator.fsm
View full article
In this video you will learn how to use the Send To Port property of 3D objects to define the routing of flowitems. For more video tutorials, you can subscribe to the FlexSim Andina YouTube channel and check out our FlexTips playlist.
View full article
In this video you will learn how to create different patient flows in a healthcare facility based on the information stored in a label. For more video tutorials, you can subscribe to the FlexSim Andina YouTube channel and check out our FlexTips HC playlist.
View full article
This article is an updated version of a previous article by @Paul Toone. Thanks to Paul for the original, which you can read here: https://answers.flexsim.com/articles/23118/flexsim-xml-18.html

In FlexSim, you can save any tree (including models and libraries) in XML format. There are many advantages to using this capability in your model development, including:

- Since XML is an ascii/text based format, it increases the utility of using content management and versioning software such as Subversion, CVS, Microsoft Visual SourceSafe, git, etc. to manage the development of a model. These systems automatically track a history of changes to a model or project, and for ascii/text based files, they allow you to see line-by-line change logs for the files, as well as revert line-item changes if needed.
- With the added benefit of versioning systems comes the ability to develop a model concurrently by different modellers using a much more streamlined method. No more saving off individual t files of one modeller's changes to a model and loading them manually into the master model. When saving to XML format, if one modeller is working on one portion of the model, only that portion of the model file changes when saved, so when the modeller checks his changes into the versioning system's repository, another modeller can automatically merge those changes into his copy. If there are conflicts, where two modellers change the same part of the model, then the versioning system has tools that allow modellers to easily see in the XML file where the conflicts occur and quickly perform the manual conflict resolution.
- FlexSim also has the added ability to distribute a single model across multiple XML files. This increases your control over how to manage the model development in your versioning system, and makes conflict resolution even easier.
- The distributed save mechanism also opens the door for much more automated control of FlexSim. By saving small pieces of your model off into separate XML files, you can auto-regenerate one or more of those XML files outside of FlexSim, consequently changing the configuration of the model for the purpose of running different scenarios.

Tutorial

This tutorial will take you through saving a model as XML, and how you can configure the distributed save.

Build a Simple Model

First build a simple example model containing a Source, Queue, Processor and Sink.

Save in XML Format

Save the model, and in the save dialog, choose "FlexSim XML" from the drop-down at the bottom. Save the model as "PostOfficeXML". Now you've saved the file in FlexSim's XML format. You can view the file in a regular text editor like Visual Studio Code, Notepad++, etc. It's not necessary to know all the details of the file format. In simple terms, the primary tag in FlexSim XML is the <node> tag, representing a node in FlexSim. The node's name is described in the <name> tag, and the node's data, if applicable, is described in the <data> tag. If you plan on doing automated changes and need a more detailed definition of the XML format, you can refer to FlexSim's XML schema (see help/MiscellaneousConcepts/FlexSimXML.xsd in the FlexSim install directory for more information).

Version Management with Git

At FlexSim, the development team uses Git to manage collaboration between developers. Part of development includes updating the MAIN and VIEW trees. In a similar way, you can use Git to collaboratively update the MODEL tree. The standard way to use Git is as follows: Set up a remote repository for the project. There are many online services for this, including GitHub, BitBucket, and many others. Each one has different pricing and terms of use. It is also possible that your company already runs a Git hosting service where you could create your remote repository on a company server. Once you have a remote repository, each user "clones" that repository to their local computer. Everyone "pulls" the latest changes from that repository and "pushes" their changes to that repository.

To manage pushing and pulling, you can use Git on the command line. However, most users interact with Git through a client. One free client is Git Extensions: http://gitextensions.github.io/

By itself, version management is a deep topic. You might consider reading some tutorials to become more familiar with Git concepts:

https://git-scm.com/docs/gittutorial
https://github.com/git-guides
https://github.com/skills/introduction-to-github
https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud

Managing Changes with Git

Let's assume you have a repository with the model file already in it. Let's make a change by moving the queue and saving the model: Once you save the model, Git can show you the changes you've made. In this case, you can see the spatialx and spatialy changes in your Git client: At this point, you could commit and push your changes. Others could then pull the changes into the model. Notice that Git looks at changes in terms of lines being added and removed. In this case, we removed two old lines and added two new lines.

Distributed Save

As models get bigger and more complex, it can be helpful to split a single model file into multiple XML files. To do this you create a "FlexSim File Map" file. This is an XML file with the .ffm extension. It must be placed in the same directory as the model's .fsx file, and, except for the extension, must be given the same name as the .fsx file. So, let's create that file. Let's also create a subdirectory to put the distributed files into.

Now edit PostOfficeXML.ffm in a text editor. The first thing we'll do is put the node MODEL:/Tools/Workspace into a different file. This is something you'll probably always want to do if you're using version management. The node MODEL:/Tools/Workspace stores all of the windows open in the model, so if you're editing a model and change or reposition the windows that are open, that will change the model file. For version management this is often just static that doesn't need to be tracked, so we'll have the node saved off to a different file, and then we can just never (or at least rarely) check that file into the version management system. Specify the following code:

<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
    <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
</flexsim-file-map>

Save the file map, then go back into FlexSim. Save the post office model again under the same name "PostOfficeXML.fsx". Now look in the distributedfiles directory. You'll see that it contains a new XML file named windows.fsx. This XML holds the definition of the node MODEL:/Tools/Workspace. All interaction with the model remains the same, i.e. from FlexSim you just load and save the main XML model file, but now the model is distributed across multiple files.
If you look at the change to PostOfficeXML.fsx in your git client, you'll see that many lines have been replaced with a single line specifying the other file: This shows how much of the content of PostOfficeXML.fsx has been moved to distributedfiles/windows.fsx.

In the file map, the main document element is the <flexsim-file-map> tag. Inside the main document element should be any number of <map-node> elements. Each <map-node> element should have a path attribute, a file attribute, and a file-map-method attribute. The path attribute should specify the path, from the main saved node, to the node that is going to be saved into the distributed file. The file attribute specifies the path, relative to the saved model file, to the distributed file that you want to save. The file-map-method should either have a value of "single-node" or "split-points". In this case "single-node" means you want to save all of that node into windows.fsx.

Now let's do an example of the "split-points" method. Let's say we want to save the Tools folder in one file, the Source and Queue into another file, and the Processor and Sink in a third file. To do this, add another <map-node> element to the file map:

<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
    <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
    <map-node path="" file="distributedfiles/Tools.fsx" file-map-method="split-points">
        <split-point name="Source1" file="distributedfiles/Source_Queue.fsx"/>
        <split-point name="Processor3" file="distributedfiles/Processor_Sink.fsx"/>
    </map-node>
</flexsim-file-map>

Now save the file, save your FlexSim model, and again you'll see new files created in the distributedfiles directory. If a <map-node> element uses the "split-points" method, then it can have sub-elements. Each <split-point> should have a name attribute, defining the name of a node inside the map-node's defined node, and a file attribute defining the file to save to. The map-node has a path="" attribute. This means that it applies to the root node, or the model itself. The map-node's file attribute defines the first file to use. This configuration tells FlexSim to save all sub-nodes in the model up to but not including the node named "Source1" to the file "distributedfiles/Tools.fsx", save everything from Source1 up to but not including the node named "Processor3" in "distributedfiles/Source_Queue.fsx", and save everything from Processor3 to the end in "distributedfiles/Processor_Sink.fsx".

Why Distribute?

The main reason you would want to distribute your model across multiple files is for version management. It allows you to easily see what you've changed in your model by seeing which files, and consequently which parts of the tree, have changed. If you inadvertently changed one piece of the model, you can easily see that and then revert that change if needed. When multiple modellers are developing the model, one modeller can essentially confine himself to one part of the model tree, and by associating that part of the tree with a specific file, it makes it much easier to merge changes from different developers, reduces the chance of conflicts in the merge, and makes it easier to do a manual merge if needed.

Note on connections: FlexSim's XML save mechanism does have one catch regarding input/output/center port connections, as well as any other mechanism that uses FlexSim's coupling data. If you change connections in one portion of your model, it will actually change the serialized values of all connections/coupling data that occur after those connections in the tree, even if those connections were not changed. This can very easily cause merge issues if multiple modellers are changing connection data. However, if you distribute the model across multiple files, connection changes where both connection end-points are within the same file will only affect that file, and will not change other files. So if you can localize large "connection sets" into individual files, you can minimize the effect of changes and subsequently minimize merge conflicts.
View full article
ContinuousUtilization_4.fsm

In the attached model, you'll find a Utilization vs Time chart driven by a Statistics Collector and a Process Flow. This article is an alternative approach to the chart shown here: https://answers.flexsim.com/content/kbentry/158947/utilization-vs-time-statistics-collector.html

Thanks @Felix Möhlmann for putting that article together. The concept in this model is very similar, but this is a different approach. There are pros and cons to each method. This method offers some performance improvements at the cost of writing more FlexScript code. This method also doesn't handle warmup time, where the other method does.

The general idea in this model is to keep a history of state changes for the previous hour (or other time interval). The history adds new info as states continue to change and drops old info as it "expires" by being older than the time interval. I chose to use a Bundle (stored on a token label) for the history. Bundles are optimized to add data to the end. If a bundle is paged (they are by default), then they are also optimized to remove data from the beginning.

[Begin Technical Discussion - TLDR; Bundles are fast at removing the first row and adding new rows]

A paged bundle keeps blocks of memory (pages) for a certain number of rows. If a page fills up, it allocates another page. It keeps track of the location in the page for the next entry. Similarly, it keeps track of the location of the first entry on the start page. If you remove the first row, that start location is moved forward on the page. If the start location gets to the end of a page, that page is dropped and the start location moves to the next page.

[End Technical Discussion]

The Process Flow maintains the history table. Whenever the object's state changes, the Process Flow adds a new entry to the history table. This part is straightforward. The tricky part is properly "expiring" the data. To do this, a second set of tokens waits for the last of the oldest data to expire. When the oldest set of data should expire, the tokens remove any rows that are too old and then wait for the next oldest row. If there are no expired rows, the tokens wait for one "time interval" and then check again. This is because if there is no expired data, data can't expire for at least one time interval. In this way, the history table is always kept up to date.

The Statistics Collector is configured to post-process the history table. It calculates the total utilization, including the state that the object is currently in, as well as the state being "phased out" of the history table. The Statistics Collector can do this calculation at any point in the model run. So the sampling interval (called Resolution in the model) is independent from the time window.
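As an illustration of the "expiring" step described above, the row-dropping logic can look something like the minimal FlexScript sketch below. The label names (History, Time, TimeInterval) are assumptions for illustration, not the attached model's exact code, and the label-access syntax may need adjusting to match how your model stores the bundle.

// Minimal sketch: drop history rows that are older than the time window.
// Assumes the token has a bundle label "History" with a "Time" column,
// and a numeric label "TimeInterval" holding the window length.
Table history = Table(token.labels.assert("History"));
double cutoff = Model.time - token.TimeInterval;
while (history.numRows > 0 && history[1]["Time"] < cutoff) {
    history.deleteRow(1);   // paged bundles remove the first row cheaply
}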
View full article
Attached is an example DLL Maker project that calls a Java function using the Java Native Interface (JNI) from C++. The steps for how this works are outlined below:

1. Install the Java Development Kit (JDK). In creating this example, I used OpenJDK installed via Microsoft's special installer of Visual Studio Code for Java developers.
2. Write a Java program. In my example, I created a simple Hello class with an intMethod() as described in IBM's JNI example tutorial.
3. Compile the .java file into a .class file. I did this using the Command Prompt and executing the following command: javac Hello.java
4. Configure the DLL Maker project to include the JNI library: In VC++ Directories > Include Directories, add the following two directories: C:\Program Files (x86)\Java\jdk1.8.0_131\include and C:\Program Files (x86)\Java\jdk1.8.0_131\include\win32. In Linker > Input > Additional Dependencies, add jvm.lib. In Linker > Input > Delay Loaded Dlls, add jvm.dll.
5. Include jni.h in your code. (Line 51 of mydll.cpp)
6. Create a JVM in your code. (Lines 61-102 of mydll.cpp)
7. Get a handle to the method you want to call and then call it. (Lines 118-126 of mydll.cpp)
8. Connect a User Command in FlexSim to fire that C++ code that executes Java code. See the dll_maker_test_model.fsm included in the attached zipped directory.

* Note: this example was built with 32-bit FlexSim because the JDK I installed was 32-bit. Using a 64-bit JVM is beyond the scope of this simple example and is left as an exercise for the user.
View full article
When we develop a simulation model that has several, or even a large number of, input variables, it becomes necessary to analyze many scenarios within the simulator to get as much detail as possible about the system's behavior. In FlexSim, the feature that enables this kind of analysis is the Experimenter. With it, we can estimate how the input variables affect the responses of an experiment and rationally plan the scenarios to be run. The video explaining a bit about experiments, and also about the Experimenter, can be found on our YouTube channel.
View full article