FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm
The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2, and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories, namely initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based and history-based order generation.
Initial Inventory
The model includes a Global Table of initial inventory. The process flow's initial inventory section reads this table, and then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot().
Inbound
I use the process flow to assign a slot to each incoming item. I use an Assign Labels activity called Find Slot to do this. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity properly finds a slot to store the item, then I go ahead and assign the item to that slot, and have an operator place it in the rack.
Outbound
I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method. In the random method, orders are generated randomly every ~30 seconds. Each order includes a number of SKU line items (again, random) and each line item includes a quantity of that SKU (again, random). Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, the flow reserves the item as "outbound" by setting the Storage.Item.assignedSlot property to null (Set to Outbound activity). This ensures that no other process will find that same item for picking. The history-based order generation process uses much of the same functionality as the random-based one, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format for what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order, then spawning that order.
Custom Rack Visualization
I have also customized the visualization of the racks. I have added a text to the front of each rack slot (and to the bottom for floor storage) that shows the address of that slot. Further, I've given the text a background that is color-coded to the SKU that the slot is designated to store. This was all done through the Storage System's Visualizations tab. I customized the Rack visualization.
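For reference, here is a rough FlexScript sketch of the inbound/outbound reservation logic described above. Treat every call as an approximation of what the pick options generate rather than exact generated code; the query text, parameter passing, and label names are illustrative only:

Object box = token.item;                       // assumption: the token references the flow item via an 'item' label
// ask the storage system for a random matching slot that still has space (paraphrase of the Find Slot option)
Storage.Slot slot = Storage.system.findSlot("WHERE slot.Type = $1.Type AND slot.hasSpace($1) ORDER BY RAND()", 0, box);
if (slot) {
	Storage.Item storageItem = Storage.Item(box);  // the item's storage interface
	storageItem.assignedSlot = slot;               // inbound: reserve the slot for this item
	// later, when the item is picked for an order (the Set to Outbound activity):
	// storageItem.assignedSlot = null;            // no other pick will now find this item
}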
View full article
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy to set up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm
All necessary setup is done through labels of the collector. The first three are identical to labels found on the default Statistics Collector behind a state bar chart.
- Objects should point at a group that contains all objects the collector should track the utilization for.
- StateTable is a reference to the state table that will be used to determine which state counts as 'utilized'.
- StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile).
- MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization.
- NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600s, taking 12 measurements within that interval, i.e. one every 300s. Each measurement still denotes the utilization over the complete "MeasureInterval". The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the previous hour from that point in time.
- StoredTimeMap is used to allow the collector to correctly function past a warmup time, by storing the total utilized value of each object up to that point. This should not be changed manually. Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply".
The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The "NumSubIntervals" label essentially just controls how many entries are kept in that array.
To copy the collector into another model you can create a fresh collector in the target model. Then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector. I hope this can help to speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it). Best regards Felix
Edit: Added a user library with the collector as a draggable icon to the attached files.
Edit 2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this.
Edit 3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
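To make the rolling-window idea concrete, here is a minimal self-contained FlexScript sketch of the calculation described above. The values, label names, and structure are illustrative, not the collector's actual internals:

Array samples = [120.0, 180.0, 250.0];       // cumulative 'utilized time' samples for one object, oldest first (kept as a row label)
double current = 330.0;                      // hypothetical current total utilized time of that object
double measureInterval = 3600;               // MeasureInterval label
int numSubIntervals = 12;                    // NumSubIntervals label
samples.push(current);                       // record the newest sample
while (samples.length > numSubIntervals + 1)
	samples.shift();                         // discard the oldest sample once the window is full
// average utilization over the window = utilized time gained across the window / window length
double utilization = (samples[samples.length] - samples[1]) / measureInterval;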
View full article
This model and library will allow you to produce a heat map of anything moving in the model - including AGVs and flow items. To add this to a model is simply a matter of:
1) Load the attached user library
2) Add objects to the group HeatMapMembers
3) Drop a heat map object (cylinder) into the model - reset and run.
With this updated version you can now have multiple mapper objects in the same model showing different groups - made easier by the addition of a 'groupName' label on the mapper. You can easily change the height at which the map is drawn using the 'zdraw' label and alter the sampling interval and grid size using the 'heatInterval' and 'resolution' labels. The resolution is the number of divisions per model length unit. In the example model, set to metres, a value of 2 gives 4 divisions per square metre. Currently non-flowitems are set to ignore time when the object is in an idle state. HeatMapAnything.fsl HeatMapAnything.fsm
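As a rough illustration of how a 'resolution' value maps a position to a grid cell (a sketch only, not the mapper's actual code; it assumes the tracked object's location is already expressed in model coordinates - items contained inside other objects would first need projecting into model space):

double resolution = 2;                                 // divisions per model length unit ('resolution' label)
Object mover = Group("HeatMapMembers")[1];             // one of the tracked objects
int col = Math.floor(mover.location.x * resolution);   // grid column the object currently occupies
int row = Math.floor(mover.location.y * resolution);   // grid row the object currently occupies
// at resolution 2, each cell is 0.5 x 0.5 model units, i.e. 4 cells per square metre in a metre-based model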
View full article
Attached is an example model and user library comprising commands to return an array of objects whose bounding boxes intersect, and a Collision Detection object to drop into your model. The Collision Detection object has a ticker interval label to adjust the frequency of checks and will switch the colliding objects to selected. It looks for two groups - "Obstacles", containing static objects in the scene (which may overlap one another without being recorded as collisions), and "Colliders", which are the objects navigating the scene that should be checked for intersecting bounding boxes. In the example model I'm adding the flowitem when it is created using Group("Colliders").addMember(item). The detector code is on its FlexScript label, 'analyseScene', which is first scheduled to run by the object's reset trigger. collisionDetection3.fsm BBCollisionDetection2.fsl
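For anyone curious about the underlying test, the core of a bounding-box intersection check is just three overlapping-interval comparisons, one per axis. Below is a hypothetical user command body in that style (e.g. boxesIntersect(a, b)); it is a sketch, not the library's code, and assumes each argument is an array of model-space extents that the library would compute for you:

Array a = param(1);   // [xmin, xmax, ymin, ymax, zmin, zmax] of the first object
Array b = param(2);   // extents of the second object
return a[1] <= b[2] && b[1] <= a[2]     // x intervals overlap
	&& a[3] <= b[4] && b[3] <= a[4]     // y intervals overlap
	&& a[5] <= b[6] && b[5] <= a[6];    // z intervals overlap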
View full article
This article reviews one method for making a state Gantt chart for the default and alternate state profiles:
Example Model
You can download the model for this walkthrough (stateganttdemo.fsm). The model has two multiprocessors, in a Group called Multiprocessors. Each multiprocessor has two processes: Process1 and Process2. To make the chart, we will first make a Statistics Collector, and then a Calculated Table.
Making the Statistics Collector
Make a new Statistics Collector. On the Event Listening tab, use the Sampler to listen to On State Change of the group of multiprocessors. You can leave the parameter names alone. However, we need to add a label, so we can record the profile number. Select the new event, and then use the green plus button in the Event Labels area to add a label for this event. Set its name to ProfileNum, and its value to the following code: data.StateProfileNode?.rank
The event settings should look something like the following:
Next we need to set the row mode. Make sure it's set to Add Per Event, with no row value.
As the final configuration step for the statistics collector, we need to set up the columns. There should be four columns in this collector:
Time - In the pick options, select Time, then Model Date/Time.
Object - In the pick options, select IDs, then ID of Event Object.
Profile - Type data.ProfileNum for the value. The default storage and display format are fine.
State - Type the following code: data.eventNode.as(Object).stats.state(data.ProfileNum).profile[data.ToState + 1][1] and set the Storage Type to String.
The code is necessary because On State Change occurs before the state is set to the new state. So the code is looking up the name of the future state in the profile table. When you reset and run this model, you will see a table like the following:
Making the Calculated Table
Make a new Calculated Table, and give it the following query:
SELECT Object, Time as StartTime, LEAD(Time) OVER (PARTITION BY Object) AS EndTime, State FROM StatisticsCollector1 WHERE Profile = 1
This query creates an Object column as well as a StartTime column. To get the time that the current state ends, we look to when the next state begins. The LEAD() function looks ahead in the table, and the OVER (PARTITION BY Object) clause makes sure that LEAD() looks at the next row with the same Object. We also record the State column, and filter out the standard state profile, keeping the special multiprocessor state profile. Once you get this query to work, change the Update Mode to By Interval, and set the interval to 20 or 30. Since the Statistics Collector table will get longer and longer, the query will become more and more expensive as the model runs. To control how much time is spent running the query, we use an interval. The final configuration of the Calculated Table should look like this:
You will need to set the Display Format of each column on the Display Format tab (Object, Date/Time, Date/Time, and Raw).
Making the Chart
Make a new dashboard, and create a Gantt chart. Point it at the Calculated Table. When you do that, the chart should fill in all the other columns correctly.
Charting Both State Profiles for Both Objects
In order to chart both profiles on the same chart, we first need to add a column to the Statistics Collector, and then update the query in the Calculated Table. The new column should be named ObjectAndProfile, with a Storage Type of String.
Use the following code for a value: data.eventNode.name + " - " + string.fromNum(data.ProfileNum.as(int)) Then change your query to the following: SELECT ObjectAndProfile, Time as StartTime, LEAD(Time) OVER (PARTITION BY ObjectAndProfile) AS EndTime, State FROM StatisticsCollector1 With these changes, you should be able to view both profiles for both multiprocessors.
View full article
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that allows FlexSim to be served to a browser, you may decide you want to proxy to this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Apache web server.
Install the FlexSim Web Server Program
Download and install the FlexSim Web Server from https://account.flexsim.com
Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
Change the port from 80 to 8080
Start the FlexSim Web Server by double clicking flexsimserver.bat
Test the server by going to http://127.0.0.1:8080 It should look like this:
Install Apache Web Server for Windows
Download Apache x64 from https://www.apachehaus.com/cgi-bin/download.plx
Extract the httpd-<version>.zip file you have downloaded
Go into the httpd-<version> folder you have extracted and copy the Apache24 (or Apache25) folder to C:\
Install Apache Dependencies
Make sure you have the Microsoft Visual C++ 2008 SP1 package installed. You can get it here: https://www.microsoft.com/en-US/download/details.aspx?id=26368
Download the vcredist_x64.exe package and run the installation
Configure Apache
Open C:\Apache24\conf\httpd.conf in a text editor
Look for the following lines in this configuration file and remove the # character:
#LoadModule proxy_module modules/mod_proxy.so
#LoadModule proxy_http_module modules/mod_proxy_http.so
#LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
Those modules should now look like this:
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
At the bottom of the httpd.conf file, add these 3 lines and save the file:
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
AllowEncodedSlashes On
Run Apache Web Server
In a file explorer or CMD line prompt, browse to the C:\Apache24\bin folder and run httpd.exe
Now that you have the FlexSim Web Server proxied through Apache, you may decide you want to configure Apache to handle security, authentication and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting these customizations up. A few resources you may consider: https://community.apachefriends.org/f/ https://stackoverflow.com
View full article
This article is an updated version of a previous article by @Paul Toone. Thanks to Paul for the original, which you can read here: https://answers.flexsim.com/articles/23118/flexsim-xml-18.html
In FlexSim, you can save any tree (including models and libraries) in XML format. There are many advantages to using this capability in your model development, including:
Since XML is an ASCII/text-based format, it increases the utility of using content management and versioning software such as Subversion, CVS, Microsoft Visual SourceSafe, Git, etc. to manage the development of a model. These systems automatically track a history of changes to a model or project, and for ASCII/text-based files, they allow you to see line-by-line change logs for the files, as well as revert line-item changes if needed.
With the added benefit of versioning systems comes the ability to develop a model concurrently by different modellers using a much more streamlined method. No more saving off individual .t files from one modeller's changes to a model and loading them manually into the master model. When saving to XML format, if one modeller is working on one portion of the model, only that portion of the model file changes when saved, so when the modeller checks his changes into the versioning system's repository, another modeller can automatically merge those changes into his copy. If there are conflicts, where two modellers change the same part of the model, then the versioning system has tools that allow modellers to easily see in the XML file where the conflicts occur and quickly perform the manual conflict resolution.
FlexSim also has the added ability to distribute a single model across multiple XML files. This increases your control over how to manage the model development in your versioning system, and makes conflict resolution even easier. The distributed save mechanism also opens the door for much more automated control of FlexSim. By saving small pieces of your model off into separate XML files, you can auto-regenerate one or more of those XML files outside of FlexSim, consequently changing the configuration of the model for the purpose of running different scenarios.
Tutorial
This tutorial will take you through saving a model as XML, and how you can configure the distributed save.
Build a Simple Model
First build a simple example model containing a Source, Queue, Processor and Sink.
Save in XML Format
Save the model, and in the save dialog, choose "FlexSim XML" from the drop-down at the bottom. Save the model as "PostOfficeXML". Now you've saved the file in FlexSim's XML format. You can view the file in a regular text editor like Visual Studio Code, Notepad++, etc. It's not necessary to know all the details of the file format. In simple terms, the primary tag in FlexSim XML is the <node> tag, representing a node in FlexSim. The node's name is described in the <name> tag, and the node's data, if applicable, is described in the <data> tag. If you plan on doing automated changes, and need a more detailed definition of the XML format, you can refer to FlexSim's XML schema (see the FlexSim install directory/help/MiscellaneousConcepts/FlexSimXML.xsd for more information).
Version Management with Git
At FlexSim, the development team uses Git to manage collaboration between developers. Part of development includes updating the MAIN and VIEW trees. In a similar way, you can use Git to collaboratively update the MODEL tree. The standard way to use Git is as follows: Set up a remote repository for the project.
There are many online services for this, including GitHub, Bitbucket, and many others. Each one has different pricing and terms of use. It is also possible that your company already runs a Git hosting service where you could create your remote repository on a company server. Once you have a remote repository, each user "clones" that repository to their local computer. Everyone "pulls" the latest changes from that repository and "pushes" their changes to that repository. To manage pushing and pulling, you can use Git on the command line. However, most users interact with Git through a client. One free one is called Git Extensions: http://gitextensions.github.io/ By itself, version management is a deep topic. You might consider reading some tutorials to become more familiar with Git concepts: https://git-scm.com/docs/gittutorial https://github.com/git-guides https://github.com/skills/introduction-to-github https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud
Managing Changes with Git
Let's assume you have a repository with the model file already in it. Let's make a change by moving the queue and saving the model: Once you save the model, Git can show you the changes you've made. In this case, you can see the spatialx and spatialy changes in your Git client: At this point, you could commit and push your changes. Others could then pull the changes into the model. Notice that Git looks at changes in terms of lines being added and removed. In this case, we removed two old lines and added two new lines.
Distributed Save
As models get bigger and more complex, it can be helpful to split a single model file into multiple XML files. To do this you create a "FlexSim File Map" file. This is an XML file with the .ffm extension. It must be placed in the same directory as the model's .fsx file, and, except for the extension, must be given the same name as the .fsx file. So, let's create that file. Let's also create a subdirectory to put the distributed files into. Now edit PostOfficeXML.ffm in a text editor. The first thing we'll do is put the node MODEL:/Tools/Workspace into a different file. This is something you'll probably want to do always if you're using version management. The node MODEL:/Tools/Workspace stores all of the windows open in the model, so if you're editing a model and change or reposition the windows that are open, that will change the model file. For version management this is often just static that doesn't need to be tracked, so we'll have the node saved off to a different file, and then we can just never (or at least rarely) check that file into the version management system. Specify the following code:
<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
<map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
</flexsim-file-map>
Save the file map, then go back into FlexSim. Save the post office model again under the same name "PostOfficeXML.fsx". Now look in the distributedfiles directory. You'll see that it contains a new XML file named windows.fsx. This XML file holds the definition of the node MODEL:/Tools/Workspace. All interaction with the model remains the same, i.e. from FlexSim you just load and save the main XML model file, but now the model is distributed across multiple files.
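If you do want Git to ignore the window-layout file entirely, one simple way (assuming the directory layout above) is a .gitignore entry next to the model file, for example:

# don't track window layout changes
distributedfiles/windows.fsx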
If you look at the change to PostOfficeXML.fsx in your Git client, you'll see that many lines have been replaced with a single line specifying the other file: This shows how much of the content of PostOfficeXML.fsx has been moved to distributedfiles/windows.fsx. In the file map, the main document element is the <flexsim-file-map> tag. Inside the main document element should be any number of <map-node> elements. Each <map-node> element should have a path attribute, a file attribute, and a file-map-method attribute. The path attribute should specify the path, from the main saved node, to the node that is going to be saved into the distributed file. The file attribute specifies the path, relative to the saved model file, to the distributed file that you want to save. The file-map-method should either have a value of "single-node" or "split-points". In this case "single-node" means you want to save all of the Workspace node into windows.fsx.
Now let's do an example of the "split-points" method. Let's say we want to save the Tools folder in one file, the Source and Queue into another file, and the Processor and Sink in a third file. To do this, add another <map-node> element to the file map:
<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
<map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
<map-node path="" file="distributedfiles/Tools.fsx" file-map-method="split-points">
<split-point name="Source1" file="distributedfiles/Source_Queue.fsx"/>
<split-point name="Processor3" file="distributedfiles/Processor_Sink.fsx"/>
</map-node>
</flexsim-file-map>
Now save the file, save your FlexSim model, and again you'll see new files created in the distributedfiles directory. If a <map-node> element uses the "split-points" method, then it can have <split-point> sub-elements. Each <split-point> should have a name attribute, defining the name of a node inside the map-node's defined node, and a file attribute defining the file to save to. The second <map-node> has a path="" attribute. This means that it applies to the root node, or the model itself. The map-node's file attribute defines the first file to use. This configuration tells FlexSim to save all sub-nodes in the model up to but not including the node named "Source1" to the file "distributedfiles/Tools.fsx", save everything from Source1 up to but not including the node named "Processor3" in "distributedfiles/Source_Queue.fsx", and save everything from Processor3 to the end in "distributedfiles/Processor_Sink.fsx".
Why Distribute?
The main reason you would want to distribute your model across multiple files is for version management. It allows you to easily see what you've changed in your model by seeing what files, and consequently which parts of the tree, have changed. If you inadvertently changed one piece of the model, you can easily see that and then revert that change if needed. When multiple modellers are developing the model, one modeller can essentially confine himself to one part of the model tree, and by associating that part of the tree with a specific file, it makes it much easier to merge changes from different developers, it reduces the chance of conflicts in the merge, and makes it easier to do a manual merge if needed.
Note on connections: FlexSim's XML save mechanism does have one catch regarding input/output/center port connections, as well as any other mechanism that uses FlexSim's coupling data.
If you change connections in one portion of your model, it will actually change the serialized values of all connections/coupling data that occur after those connections in the tree, even if those connections were not changed. This can very easily cause merge issues if multiple modellers are changing connection data. However, if you distribute the model across multiple files, connection changes where both connection end-points are within the same file will only affect that file, and will not change other files. So if you can localize large "connection sets" into individual files, you can minimize the effect of changes and subsequently minimize merge conflicts.
View full article
Tokens and Flow Items can be very difficult to add to a chart. This is because they don't exist on Reset, making them difficult to select. This article shows how you can use a Process Flow to allow a Statistics Collector to record a token's changing label value, and also to chart that value over time:
The model for this example is attached (graphlabeldata.fsm). It is a very simple model: The Scheduled Source creates three tokens, each of which creates a label called data. This label is created by choosing "Add Tracked Variable" for the value, which opens this dialog box: The reason we want the label to be a Tracked Variable is that Tracked Variables emit an OnChange event. We want to listen to that event. If you use the time interval collection method, discussed later in this article, you don't need to make the label a Tracked Variable. Each token then goes through a loop, where it waits, and then updates the value. This is meant to represent a much more complicated model, where the token travels through many activities, any of which could change its label value. For this example, the model randomly changes the value on the label.
Now that we have a token and a label whose value is changing as the model runs, we can work on making a chart. We want to eventually make a Statistics Collector, but Statistics Collectors can only listen to events of objects that exist after the model has been reset. Tokens and FlowItems (along with their labels) are destroyed on reset, and so we can't listen directly to them. However, some Process Flow activities can listen to events on tokens and flowitems, and the Statistics Collector can listen to those events. For that reason, we make a second Process Flow: This flow has an Event-Triggered Source, which listens for tokens to leave the "Init Tracked Variable" activity in the first flow. When that happens, the source creates a token, and that token immediately gets a reference to the label node (note that this is different from the value of the label node). Next, the new token goes to a Start activity. The Start activity is called "Log Change." This activity is just a placeholder. While you could technically live without this activity, it makes things a little clearer, as we will discuss later. Other than providing OnEntry and OnExit events, the Start activity has no internal logic whatsoever. After passing through the Log Change activity, the new token waits for the label value to change: In order to listen to this event, you can first sample a Tracked Variable in the Toolbox. This provides the OnChange event. Then you can update the Object field to the code shown above. Notice that every time this event happens, the token simply passes through the Log Change activity, and then resumes listening.
When the original label value changes, it emits an OnChange event. When that event fires, the token listening to that event travels through the Log Change activity, which emits OnEntry and OnExit events. We can use these events in the Statistics Collector. The key to this technique is that we used Process Flow, which is good at listening to token and flowitem events, to generate Activity events, which can be used in the Statistics Collector. In the attached model, the first Statistics Collector is configured like this: It simply listens to the On Entry of the Log Change activity.
The columns are defined as follows: The first two columns are simpler; the Time column uses the Model Date/Time option: The second column gets the ID of the token as an integer: The third column gets the current value of the Data label: Now that the Statistics Collector is set up, we can configure the chart to use this collector, and split by the Token ID. The process to record the label value every interval (rather than on every change) is very similar. The downside is that the data is less granular, but the upside is that a label doesn't have to be a Tracked Variable to be charted. The example model simply uses a Split activity to copy the data from the Event Triggered Source, and sends it to a similar listening loop: Instead of waiting for the value to change, the second token waits for a fixed time interval. A similar Statistics Collector will allow you to create the following chart: This approach works for every token created by the scheduled source. No matter how many tokens you create, each will show up on the chart:
View full article
This example model was created to clarify pre-emption of task sequences created in process flows. Specifically it will show:
1) that pre-empting of task sequences generated in a process flow does not require the token generating the sequence to be pre-empted.
2) that you can push a partially created sequence to a tasksequence list type.
3) how to use recursive calls to a sub flow.
and afterwards:
4) how the milestone task can be used in a PF generated task sequence
5) the use of a nodefunction task type
Each queue is a member of the object process flow shown in the picture. Each generates a box along with a job to take it to another queue and pulls an initial task executer from the list of FreeTEs. The first task, to travel and collect the box, is preemptable, and so we push the task sequence to a Preemptable Job list before we add that travel task and pull it off the list when the task has completed. The rest of the task sequence is a standard load, travel and unload set of tasks. When the job is complete we know that the TE is free and could preempt another TE - which could be desired if the newly free TE is closer to the pickup queue than the one currently assigned. The CheckPreempt subflow is called with a label referencing the TE that has become free. This TE may not be the original TE that we pulled from the list of FreeTEs when the job was started, and so we discover the current TE by looking for the task sequence's owner object - done with the function findownerobject(), since it does not cache the value in the same way ownerobject() does, and we could have had more than two TEs assigned to this task sequence. Since the task sequence gets destroyed, I do this in the Assign Labels activity before we finish the task sequence:
The Check Preempt subflow first tries to pull a preemptable job where the invoking TE is closer than the current executer of that job, and chooses the one where the difference is most pronounced. The tasksequence list type has the distance field prepopulated, and from that the two other field expressions, otherDistance and howMuchCloser, are derived. If no task sequence is chosen then we push the finishing TE to the FreeTEs list. If we choose a task sequence to preempt then we first label the token with the otherTE and then preempt the task sequence with a simple zero-delay task. Then we reDispatch the task sequence to the TE that is calling Check Preempt. Since we now know that we've freed the otherTE from the task sequence it had, we can invoke the Check Preempt subflow for that task executer, knowing that it will either get put onto the FreeTEs list or get another task sequence and preempt another TE. This is thus a recursive subflow call, with the exit condition being that no preemptable task sequence is found. Note that no token preemption from activities takes place, and this is because the task sequence is still tied to the token and its activity, which will still receive the callback when a task is complete, regardless of which task executer actioned the task. TEpreemptIfCloser1.fsm
Next: Changing colors to indicate preemptable TEs and their destination - using milestone and nodefunction tasks.
To better see the allocation and preempt activities we'll now change the model such that the TE color reflects the following:
White when idle
Gray when busy and not preemptable
Matching the pickup queue's color when preemptable.
First of all I defined two user commands - one to set the color of the TE (setTEcolor) and another (commandNode) to get the executable node of a user command.
This is so that I can add the fn() SetColor activity (it's a custom task of type nodefunction); I specify the function using commandNode("setTEcolor") and pass in the queue color as parameter 2 (parameter 1 will be the TE that is executing this task). This means the user command code is just:
Object te=param(1);
Color col=param(2);
te.color=col;
The milestone task allows us to define a set of tasks that need to be repeated if the TE gets pre-empted. It's added again using the Custom Task activity and selecting the type as Milestone, and it has a parameter to specify the number of subsequent tasks that should be considered within the preemption range that will trigger tasks to be repeated. Since we want the Travel preemption to retrigger the setting of the color we set the range to 2. However, since the token only needs to be notified when the travel task is complete that it should go on to the next process, and until then waits in the travel task, we can uncheck the "Wait Until Complete" box of the Milestone and fn() SetColor activities: If you don't clear this for those two tasks we expect some misbehavior/desynchronization with the token. Finally there needs to be a reset trigger on the TE to set the color to white, and a regular Change Visual activity after the pickup point is reached to set it to gray. Note I prefer the use of commands to sending messages, which would be another option, mostly because users can tell the intent of the call without having to inspect the message code. TEpreemptifCloser2.fsm
View full article
In version 2018 and on, you can make this chart by dragging the Throughput Per Hour by Type template from the dashboard library. If you install the template (available on the Advanced tab), you will see a Process Flow and a Statistics Collector appear in your toolbox.
One of the most common questions from FlexSim users is as follows: How do I make a chart that shows the output every hour? You can make this chart in three steps.
Configure the Statistics Collector
First, you need a Statistics Collector. Make a new one in the toolbox (click the green plus button, select Statistics, and then select Statistics Collector). On the Event Listening tab, use the green plus button to add a timer event, and configure as shown here: This timer event will fire every hour (every 3600 seconds) in the model. Notice the shared label, which stores all members of the Processors group as an array. We will use this label in the next step. Once you have configured the timer, you need to set up the row mode for this collector. We want one row per processor, and we need to use the Processors label as the row value. Since the Processors label is an array, we will get three rows per timer event, each row corresponding to a processor. Finally, we can add the columns. The three columns are as follows:
Time - use the pick list to select Model Date/Time from the Time menu
Object - use the pick list to select ID of row value from the IDs menu
Output - use the pick list to select Statistic by Object from the Object Statistics menu; use data.rowValue as the object value in the popup
If you use the pick options to choose these options, then the storage type and display format options should be set automatically. With these three columns in place, we can watch the table populate. Reset and run the model at high speed. Every model hour, you should see a new set of rows appear, one for each processor in the group. The table will look something like this:
Configure the Calculated Table
The Statistics Collector table from the previous steps is close to what we want, except that the output value always increases as the model runs. But what about the output for just a single hour? To get that value, we can use a Calculated Table. Make a new calculated table, and give it the following query (in the Query field):
SELECT Time, Object, ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), 0) AS OutputPerHour FROM StatisticsCollector1
This query uses SQL window functions. Basically, it says that each row's value should subtract the previous row's value for the same object. In addition, if that value is NULL (because it's the first row for that object), ISNULL() replaces it with zero. For example, if a processor's cumulative Output reads 10, 25, and 40 over the first three hours, its OutputPerHour values become 10, 15, and 15. If you reset and run the model, so that the collector table has at least a few rows in it, click the Update button to run the query. Notice that the Time and Object columns show numbers. This is because the Calculated Table can't infer the formatting of the column. To set the formatting, use the Display Format tab. You may also wish the table to update every hour, with the Statistics Collector.
Make the Chart
Now that our data is correct, we can make a chart. Make a new dashboard, and create a Time Plot chart. Point the chart to the calculated table. Let's use the Time column for the X values, and let's use the OutputPerHour column for the Y values. In addition, make sure to split by the Object column. If the calculated table updates every hour, then running the model should create the chart shown at the beginning of this article.
Here is the model used to create this chart (should work in 2017 Update 2 Beta or later; beta must be built on or after August 21, 2017). outputperhourdemo.fsm
View full article
This article demonstrates how to use the Statistics Collector and Calculated Tables to create three utilization pie charts:
a state pie chart
an individual utilization pie chart
a group utilization pie chart
Example Model
You can download the model (utilizationdemo.fsm) to see the working demonstration. The model has a Source, a Processor, a Sink, a Dispatcher, and several operators. The operators carry flow items to and from the processor, as well as operate the processor. The operators are in a group called Operators.
State Pie Chart
First, we need to make a Statistics Collector that collects state data for the operators. The easiest way to do that is to use the pin button to pin the State statistic for any object in the model. Use the pin button to pin a pie chart to a new dashboard. The pin button creates a new Statistics Collector, as well as a new chart. Open the properties for the new Statistics Collector (double click on it in the toolbox), and change its name to OperatorStatePie. On the Data Recording tab, remove the object from the Enumerated Rows table. Using the sampler, add the Operators group (you can sample it in the toolbox). Now, when you reset and run the model, the state chart should work.
Utilization Pie Chart
Often, users need to combine several states into a single value that can be used to determine the utilization of an object in the model. In order to gather this data, we can use a calculated table. Make a new Calculated Table, and give it the following query:
SELECT Object, (TravelEmpty + TravelLoaded + Utilize) / Model.statisticalTime AS Busy, 1 - Busy AS NotBusy FROM OperatorStatePie
This query sums the time in several states into a total, and then divides by the statistical time. Be sure to set the name of the table (the part after FROM) to the name of your Statistics Collector. Run the model for a little bit of time, and then click the Update button on the properties window. You should get a table like the following: Instead of viewing the data as just numbers, change the Display Format of each column to better represent the data. On the Display Format tab, set the Object column to display Object data. Then set the other two columns to display percentages. When you switch back to the Calculations tab, the data will be formatted: Once we get the query right, set the update mode to Always. This will update the data in the table whenever the data is needed, including every time the chart draws. If updating the table is computationally expensive, you can use the By Interval or Manual options. Generally, a small number of rows (1-100) is small enough to use the Always mode. Regardless of the update mode, we can make a chart based on this table. In the dashboard, create a new Pie Chart. For the Data Source, select the calculated table. For the Pie Title, select the Object column. For the Center Data, select the Busy column. Be sure to include the Busy and NotBusy columns. This should show you a pie chart, comparing the operator's busy and not busy time.
Group Utilization
To make the final utilization chart, make a second calculated table. The query for the second table should be as follows:
SELECT AVG(Busy) AS AvgBusy, AVG(NotBusy) AS AvgNotBusy FROM CalculatedTable1
Again, use the Update button to be sure the query is correct. Once it is, set the update mode to Always.
Finally, you can make the pie chart for this data:
Things to Try
If you feel comfortable with this model, you can try a couple of extra tasks, such as:
Remove one of the operators from the group, reset, and run. The charts will update accordingly.
Add the Processor to the group, reset, and run. The state chart should work automatically.
View full article
One of the most powerful uses of a Fixed Resource Process Flow is to unify a set of machines and operators as a reusable entity. For example, many factories have several production lines. Simulation modelers often wonder what would happen if they could either open or close more production lines. If you were to open a new line, would the increased output be worth the cost? Or if you were to close a line, would the factory still be able to meet demand? These are the kinds of questions modelers often ask, and by using a Fixed Resource Process Flow, you can answer them. This article demonstrates how to use a Fixed Resource Process Flow (or FR Flow) to coordinate several machines and operators as a single entity. The example model represents a staging area, where product is staged before being loaded onto a truck. However, most of the concepts that are discussed could be used in any Fixed Resource Process Flow.
Creating a Collection of Objects
When you set out to build a reusable FR Flow, it is usually best to start by making a single instance of the entity you want to replicate. Often, it is convenient to build that entity on a Plane. When you drag an object into a Plane, it is owned by the Plane, which makes the Plane a natural collection. If you copy and then paste the Plane, the copy will have all the same objects as the original Plane. This makes it very easy to make another instance of your entity: just copy and paste the first. If the Plane is attached to an FR Flow, then the copy will also be attached to the same Flow, and will then behave the exact same way. In the example model, you will find a Plane with a processor, five queues, and two operators. This makes up a single staging area. The Plane itself is attached to the FR Flow, meaning an instance of the FR Flow will run for the Plane. It also means the Plane can be referenced by the value current. (You may find it helpful to set the reset position of any operators, especially if they wander off the plane at any point during the model run.)
Using Process Flow Variables
In order to drive the logic in your entity using an FR Flow, you will need to be able to reference the objects in each entity, so that they will be easily accessible by tokens. For example, if items are processed on Machine A, Machine B, and Machine C in a production line, then you would want an easy way to reference those machines in each line. This is where Process Flow Variables come in. In the example model, you will find the following Process Flow Variables: Every staging area has a Packer, a Shipper, and a Palletizer, each referenced using the node command (a sketch of these values appears at the end of this article). Remember that current is the instance object, which is the Plane. Once these variables are in place, you could create an FR Flow like the following: If you ran this model with the Plane shown previously (complete with correctly named objects) and this flow with the shown variables, then the Packer operator would travel to the Palletizer. If you then copied the Plane, you would see that all Packers go to their respective Palletizers, as shown below:
Using Variables and Resources Together
Sometimes, you may want to adjust the number of operators per line, or even the number of operators in a specific role. To do this, you can again use Process Flow Variables.
The sample model includes these variables: The sample model also includes these resources; the properties for the Shipper Resource are shown: Because this resource is Local, it is as if each attached object has its own version of this resource. Because it references a 3D operator, it will make copies of that operator when the model resets. The number of copies it creates will depend on the local value of the ShipperCount variable. To edit the value for a particular instance object, click on that object in the 3D view, and edit the value in the Quick Properties window.
Disabling Entities
Often, modelers simply want to "turn off" parts of their model. If that part of the model is controlled by a fixed resource flow, then you can easily accomplish this task. In the example model, you will find the following set of activities: The Areas resource is numeric, and it is global. The Limit Areas activity is configured so that any token that can't acquire the resource immediately goes to the Area Disabled sink. If the token that controls the process dies, then the area for that token is effectively disabled. The example model uses a Process Flow Variable to control how many areas can be active at a time: The Areas resource uses this value to control how many shipping areas are actually active. If this number is smaller than the number of attached objects, then some of the areas won't run. Note that the order the objects are attached in matters. The first attached area will be the first to generate a token in the Init Area source, and so will be the first one to acquire the area.
Using Process Flow Variables in the Experimenter/Optimizer
Currently, only Global and User Accessible variables can be accessed in the Experimenter. In the sample model, the only variable that can readily be used in an experiment is the TotalAreas variable, which dictates how many areas can be active. This allows the Experimenter or Optimizer to disable lines. To make it possible for the Experimenter or Optimizer to vary the number of Shippers or Packers per Area, the ShipperCount and PackerCount could be changed to Global Variables. You could put labels on the instance object (the Plane) that are read by the variable, like so: Then the experimenter could set the label value on the Plane object that is attached to the FR Flow. The Experimenter would set the label, which would affect the value of the Variable, which would affect how many copies of the Packer were created by the Packer resource. You could do the same for the ShipperCount variable.
Sample Model
The attached model (usingfixedresources-6.fsm) provides a working model that can demonstrate the principles in this article. It comes with only one Plane object attached to the FR Flow. Here are some things to try with the model:
Reset and run the model to see how it behaves with a single area, where that single area has only one Packer and one Shipper.
Create a copy (use copy/paste) of the Plane. Reset and run the model again to see how a second area is automatically driven by the same flow. You may need to adjust the TotalAreas variable if the second doesn't run.
Adjust the number of Packers and Shippers (using the Process Flow Variables) on each Plane. Reset and run the model to see how adding more packers and shippers affects it.
Create many copies of the first Plane (it is best to set the PackerCount and ShipperCount to 1 before copying). Reset and run the model, to observe the effect.
Add or remove some staging areas (queues) to an Area.
This model handles those changes as well (in the Put Stations On List part of the flow).
Try to set up an experiment or optimization that varies how many lines are active, and how many Packers and Shippers are in each.
In general, play around with the sample model, or try this technique on your own.
Other Applications
This technique can be used to create production/packaging lines, complex machines, or any conglomerate of Fixed Resources and Task Executers. If you follow the methods outlined in this article, you will be able to create additional instances of complicated custom objects with ease. You will also be able to configure your model for use with the experimenter or optimizer.
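As promised above, here is a rough sketch of what the Process Flow Variable values for each staging area might look like. The object names follow the sample model, but the exact expressions are illustrative rather than copied from it:

Packer: node("Packer", current)
Shipper: node("Shipper", current)
Palletizer: node("Palletizer", current)

Because current is the attached Plane, each instance of the FR Flow resolves these expressions to the objects inside its own Plane.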
View full article
If you have a contiguous conveyor network you can just route items using Conveyor.sendItem() and FlexSim will guide the item to the destination, passing through inline and side transfers as required for the shortest path. If between some conveyors you use exit and entry transfers - perhaps to easily add elevators and shuttles as transports between them - then you'll normally be faced with adding logic to figure out which exit transfer to go to and which port to take from that transfer - and in a large model that logic can be extensive and hard to maintain. The attached model and library provide commands for automated routing through multiple conveyor sub-sections connected through exit/entry transfers, to conveyor points and to connected fixed resources. This means that you may no longer have to write sendTo code with case statements on each exit transfer to determine which port an item should exit through - nor possibly need to have decision points with case logic to decide the destination for Conveyor.sendItem(). In the example model three sources create items with random destinations which are routed through the conveyor system, transfers and ports automatically to arrive at the correct destinations - some of the ports having transport to perform the move.
To make this work in any model you should load the user library, which will auto-install a set of user commands and a General Process Flow. The first step is to run the user command 'createAllTravelMaps()', which will calculate all the reachable destinations (decision points, stations, photo eyes, attached fixed resources and transfers) from all the conveyor points and entry/exit transfers, along with estimates of the convey time (from the conveyor class). This information is consolidated to create the shortest routes and is stored in a label 'travelMap' on each decision point, station, photo eye and transfer.
To make use of the travelMap data there are three additional user commands supplied that are intended to be used directly by the modeller:
getNextConveyPoint(thispoint, destination) - returns the next point to send an item to from this point in order to ultimately reach the destination.
getConveyExitPort(exitTransfer, destination) - returns the port through which an item should exit the exitTransfer in order to reach the destination.
getConveyItemsNextConveyPoint(item, destination) - returns the next point to which an item should travel to reach the destination from its current position on a conveyor.
The simple process flow in the example and library is set to listen to the Group members of EntryTransfers and ExitTransfers in order to look up the 'destination' label and either send the item to the next point or, in the case of the exit transfers, override the sendTo port with the value from the map. I've added some documentation to the user commands which you can access easily via the command helper: ConveyorTravelMaps_0.3.fsl ConveyorTravelMapExample.fsm
You may find createAllTravelMaps() takes a while, which is why a progress bar has been added. You may not need all points to be evaluated exhaustively, so the option to pass in a flag indicating to only start evaluation from Entry Transfers is given, which will create somewhat incomplete maps for intermediate points. A future refinement would be to account for transport time from exit transfers, either by recording the times or providing a port list with the expected times. Clearly if you make changes to your transfer positions or conveyor layout you should rerun createAllTravelMaps().
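If you wanted to call the commands yourself rather than rely on the supplied process flow, a trigger on an exit transfer could look something like the sketch below. This is illustrative only - it assumes the item carries a 'destination' label (as in the example model) and that the snippet runs somewhere with item and current available, such as a Send To Port field:

// route the item out of the port that leads toward its destination
Object destination = item.destination;            // label set when the item was created
return getConveyExitPort(current, destination);   // port number taken from the transfer's travelMap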
View full article
This article describes an example of Reinforcement Learning being used to solve scheduling problems. See the model and python files in the attached zip file. SchedulingRL.zip
Problem Description
This model represents a generic sheet metal processing plant. There are four machines in series. Each job requires time on all four machines. Jobs come in batches of 10. A poor sequence of jobs will cause blocking between items, lowering throughput. If the time between batches is long, such as a shift or a day, you could use the optimizer to determine the best sequence. If the time between batches is short, however, the optimizer may not be feasible. For real sequencing problems, the time to find a good sequence can be anywhere from 5 minutes to an hour, or even longer. This makes it impractical for high-velocity situations. The attached model requests a decision every time the first machine in the series is available. The only action is an index for the Nth available job. So the decision can be interpreted as "which job should I do next?"
Solution
The general solution is to use reinforcement learning. However, this problem required customized python scripts:
The model uses custom parameters for observations. This allows arbitrary values for observations.
The model uses a custom observation space. The observations include a table of the required times at each station for the remaining jobs. They also include an array of the in-progress jobs and their predicted remaining times. By using a Dict space, the python scripts can combine all the observations into a single space.
The model uses an Action Mask. An Action Mask is a binary array with one value per value of the action. This tells the RL algorithm about invalid options.
The python scripts require the sb3-contrib package. Use pip install sb3-contrib to install it.
Results
After training for 500k time-steps, the agent learns to choose jobs moderately well. If you run the inference script, you can use the experimenter to compare a random policy to a trained agent.
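For readers unfamiliar with Dict observation spaces and action masks, here is a rough Python sketch of the kind of spaces described in the Solution section above. It is not taken from the attached scripts - the names and shapes are illustrative, in the style commonly used with sb3-contrib's MaskablePPO:

import numpy as np
from gymnasium import spaces

MAX_JOBS = 10          # jobs per batch
NUM_MACHINES = 4       # machines in series

# Dict space combining both kinds of observations into a single space
observation_space = spaces.Dict({
    # required time on each machine for every remaining job (0 once a job is done)
    "job_times": spaces.Box(low=0.0, high=np.inf, shape=(MAX_JOBS, NUM_MACHINES), dtype=np.float32),
    # predicted remaining time of the job currently on each machine
    "in_progress": spaces.Box(low=0.0, high=np.inf, shape=(NUM_MACHINES,), dtype=np.float32),
})

# the action is "which of the available jobs should be started next"
action_space = spaces.Discrete(MAX_JOBS)

def action_masks(num_remaining_jobs: int) -> np.ndarray:
    """Binary array with one entry per action: True = valid choice, False = invalid."""
    mask = np.zeros(MAX_JOBS, dtype=bool)
    mask[:num_remaining_jobs] = True
    return mask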
View full article
In the attached model we use a Time Table and two MTBF/MTTR objects to define Schedule Loss, Availability Loss (breakdowns), and an element of Performance Loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the send-to percentage, which accounts for Quality Loss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with a minimum of 5 seconds, so it also contributes to Performance Loss. When the Type of the item changes, a setup time occurs, which is the final contributor to Performance Loss. Two state profiles were added to the processor: one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) as well as regular FlexSim state changes, and determines the availability state that should prevail. A user command, getOEEstat, is used to access the values; it calculates them on demand and stores them in a label on the processor called statsMap. The syntax for this command is: getOEEstat(myMachine,"OEE") The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE". A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats for processor 1. Processor_OEE_2.fsm 2023-08-22 Update: Added 'TEE' stat.
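For illustration, here is a minimal FlexScript sketch of reading the stats with getOEEstat(); the processor name "Processor1" is an assumption, not necessarily the name used in the attached model.

```
// Assumes the getOEEstat() user command from this model is installed and
// "Processor1" is a tracked member of the OEE group.
Object machine = Model.find("Processor1");
double availability = getOEEstat(machine, "AvailabilityRatio");
double performance  = getOEEstat(machine, "PerformanceRatio");
double quality      = getOEEstat(machine, "QualityRatio");
double oee          = getOEEstat(machine, "OEE"); // conventionally availability * performance * quality
return oee;
```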
View full article
Attached is an example model that shows how you can use reversible conveyors for routing/sorting of items. ReversibleRoutingConveyor.fsm Traditionally we've warned against using reversible conveyors for purposes other than accumulation buffers. The main reason I've been hesitant to promote alternative uses is that the routing system for conveyors is, and will continue to be, static. In other words, the path-finding algorithm that sends an item through a network of conveyors to a destination point does not change when one or more conveyors in the system is reversed. Put another way, "for routing purposes, ..., the conveyor is always assumed to be conveying in its original direction." This naturally makes using reversible conveyors for routing more complex. However, as long as you can work within those constraints, you can still get the desired outcome. The attached model does this by 'shortening' the routing decision so that it can always route onto conveyors in their forward direction. The model sorts items by color by moving them between two conveyors via a reversible section that conveys in either direction as needed. In order to stay within the 'static routing' rule, I split the reversible conveyor into two separate conveyors that are directed into each other. This way, I can route items onto the reversible section by referencing a conveyor whose primary forward direction always diverts from the line a given item is on. The critical element is that I have to make sure that when one conveyor is moving forward, the other is reversed, and vice versa. I also have to implement some mutual exclusion, blocking items so they aren't sent onto the section from opposite directions at the same time. This is all done in the process flow. I honestly don't know how close this example is to a real-life situation. We've just received some requests for a reversible conveyor that can do more than accumulation buffering, and routing/sorting is the main alternative example I can think of. This is one way you can achieve such a result.
View full article
The attached model contains functionality to depict item flow as a 3D map using a FlowMapper3D object (a cylinder) and an associated Object Process Flow. Additionally, a 'kpi' label on the object gives an indication of layout performance, which you can link to and observe as you interact with or experiment on the layout. To set this up in your model, add a Group called "FlowMapperObjects" containing the objects whose entry events will be used by the mapper. Then add a ColorPalette called "HeatPalette". Finally, copy the FlowMapper3D object and the FlowMapperProcess process flow into your model. Note that there is a boolean label 'showPercents' on the FlowMapper3D object that tells it whether to show percentage text or the number of flowitems for each location pair. 3DFlowMapper.fsm
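As a quick illustration, a script like the following could read the mapper's outputs after a run. It assumes the copied object keeps the name "FlowMapper3D" and that the group and labels are named as described above.

```
// A minimal sketch, assuming the setup described in this article.
Object mapper = Model.find("FlowMapper3D");          // the copied cylinder object
int mappedCount = Group("FlowMapperObjects").length; // objects feeding the map
mapper.showPercents = 1;                             // show percentages rather than counts
double layoutKpi = mapper.kpi;                       // layout performance indicator
return layoutKpi;
```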
View full article
FlexSim 2022 introduced a Reinforcement Learning tool that enables you to configure your model to be used as an environment for reinforcement learning algorithms. That tool makes connecting to FlexSim from a reinforcement learning algorithm easier, but that tool is not absolutely necessary for this type of connectivity. The same socket communication protocols that are used by that tool are available generally in FlexScript. Attached (ChangeoverTimesRL_V22.0.fsm) is the FlexSim 2022 model that you build as part of the Using Reinforcement Learning documentation that walks you through the process of building and preparing a FlexSim model for reinforcement learning, training an agent within that model environment, evaluating the performance of the trained reinforcement learning model, and using that trained model in a real production environment. Also attached (ChangeoverTimesRL_V6.0.fsm) is a model built with FlexSim 6.0.2 from 2012 that does the exact same thing, but with custom FlexScript user commands instead of the Reinforcement Learning tool. You can use this model with the example python scripts and FlexSim 6.0.2 in the same way that you can use the other model with those same scripts in FlexSim 2022. I'm providing this FlexSim 6 model as an example that demonstrates how you can communicate between FlexSim and other programs. The Reinforcement Learning tool certainly makes this type of communication easier and simpler, with a nice UI for specifying RL-specific parameters, but the fundamental principles of how this works have been available in FlexSim for many years using FlexScript. Hopefully this example can help teach and inspire those who wish to control or communicate with FlexSim from external sources for purposes other than just reinforcement learning. FlexSim is flexible, and the possibilities are endless.
View full article
What is ODBC? FlexSim can use ODBC, and ODBC can connect to many different kinds of data sources, including files (like Excel) and databases. It allows you to use SQL queries to get data from any supported data source. ODBC uses a driver to determine how to get data from a given data source. A driver is a translator between ODBC and whatever data source you are querying. For example, there is a driver for Excel files. If you have that driver on your system, you can use ODBC to query data from Excel files. As another example, there is a driver for SAP HANA, the database for SAP. If you have that driver, then ODBC will know how to talk to SAP. This means that if the correct driver is installed and working correctly, you can use FlexSim to query any ODBC data source. Using the Database Connector FlexSim has a tool called the Database Connector. You can use it to configure a connection to a database. This article uses that tool. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/DatabaseConnectors/ As an example, let's say that you want to connect to Excel using ODBC. Assuming you have the Excel ODBC driver installed on your system, you can configure a Database Connector to look like this: Note that you must specify the full connection string. In the connection string, you can see that the driver and the file are specified. Then you can query the data in a given sheet with a query like this: SELECT * FROM [Sheet1$] You can find more information on querying Excel files at websites like this: https://querysurge.zendesk.com/hc/en-us/articles/205766136-Writing-SQL-Queries-against-Excel-files-Excel-SQL- If you use Office 365, you may need to install the Microsoft Access Database Engine 2016 Redistributable, which includes newer drivers for Excel and Access. Be sure to install it with the /quiet flag on the command line. Instructions can be found in this troubleshooting guide: https://docs.microsoft.com/en-us/office/troubleshoot/access/cannot-use-odbc-or-oledb Note that FlexSim has an Excel tool, which is usually easier to use. That tool requires Excel to be installed, but does not require the ODBC driver for Excel to be installed on your computer. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/ExcelInterface/. Excel makes a good example because most people have it, and it's easy to get the driver for it. Connection Strings Different kinds of connections require different connection strings. The following list has an example connection string for a few data sources:
Excel: Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=Path\To\Excel\File.xlsx
Access: Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=Path\To\Access.accdb
SQLite: Driver={SQLite3 ODBC Driver};Database=Path\to\sqlite.db
SAP HANA: DRIVER={HDBODBC};UID=myUser;PWD=myPassword;SERVERNODE=myServer:30015
Checking for Drivers Note that each connection string specifies a driver, and then additional information; what that additional information is depends on the driver you are using. To determine which drivers are on your system, open the ODBC Data Sources Administrator window: hit the Windows key, type ODBC, and choose the option called ODBC Data Sources (64-bit). If you are running 32-bit FlexSim, open the 32-bit version. Go to the Drivers tab. Here is what my Drivers tab looks like: You can see I have drivers for Access, Excel, SQL Server, and SQLite3. I don't have a driver for SAP HANA.
If I did, you'd see a driver named HDBODBC in the list. To access that kind of database, I'd need to install that driver. You can also see that the name of the driver used in the connection string must match exactly what is shown here. Other Info You may see an exception appear when you test the connection to your database. If the view shows that the connection succeeded, then it has succeeded. The exception happens because FlexSim tries to get a list of tables from the database it's querying, and it may not guess the right way to do that for your particular data source. That exception can be safely ignored. If you used the old db() commands in the past, consider upgrading to the Database Connector. It will be orders of magnitude faster to read in an entire table.
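For completeness, here is a rough FlexScript sketch of querying through a configured Database Connector. The connector name "ExcelODBC" is an assumption, and the class and method names are from memory of the Database.Connection API, so check them against the Database Connectors documentation linked above.

```
// Rough sketch only -- verify class/method names against the documentation.
// "ExcelODBC" is an assumed Database Connector configured with the Excel
// connection string shown in this article.
Database.Connection con = Database.Connection("ExcelODBC");
con.connect();
Database.ResultSet result = con.query("SELECT * FROM [Sheet1$]"); // query the first sheet
con.disconnect();
```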
View full article
In version 2018 and forward, you can make this chart using a Chart Template: simply drag and drop the chart from the dashboard library. This article may help you understand how the chart template works; you can use the Install button on the chart template to view the Process Flow, Statistics Collector, and Calculated Table that make the chart. This article reviews how to use the Zone, along with the Statistics Collector, to create a bar chart of the current work-in-progress (WIP) by item type. The method scales to as many types as you need (this example uses 30 types), and can easily be adapted to text data, like SKUs. An example model (zonecontentdemo.fsm) demonstrates this method. Creating the Process Flow To create this chart, we first need to gather its data. In this case, it is easiest to build off the capabilities of the Zone. In particular, we will use Zone Partitions to categorize all of our objects. After we set up the Zone, we'll use a Statistics Collector to gather the data that we need. Create a new General Process Flow. You can have as many General Process Flow objects as you want, so let's make one that just deals with gathering statistics. This way, gathering statistics will not interfere with the logic in our model. The process flow should look something like this: Here's how it works. The Listen to Entry source is configured to listen to a group of objects. In this case, the group contains all the sources in the model, and it's listening to their OnExit. However, it could be the OnEntry or OnExit of any group of 3D objects. If you want to split the statistics by Type or SKU, make sure any flowitem that reaches the entry group already has the appropriate labels. In this example model, when a flowitem leaves any source, a token gets created. The token makes a label called Item that stores a reference to the created item, as shown in the following picture. The next step is to link the flowitem with the token that represents it. The Link Token to Item activity is configured like this: Now the item has a label that links back to the token. The token then enters a zone. The Zone is partitioned by type: Finally, the token comes to a Decide activity, which is configured not to release the token. The token will be released by the second part of the flow. Once the token is released, it exits the zone and goes to a sink. The second part of the flow also has an Event-Triggered Source, configured to listen to all the sinks in the model. Again, the entry objects and exit objects are arbitrary; you can gather data for the entire model, or for just a small section of it, using this method. The Event-Triggered Source also caches the item in a label. At this point, we need to release the token that was created when the item entered the system. To do this, the Release Token activity is configured as seen here: The token created on the exit side has a reference to the item, which has a reference back to the token created on the entry side. We release this token to 1, which means connector 1. Note: We could have used a Wait for Event activity in the zone, and then used the match label option to wait for the correct item to leave the system. However, this method is much, much faster, especially as the number of tokens grows. Creating the Bar Chart Statistics Collector The next step is to create a statistics collector that gathers data appropriate for a bar chart.
Note that this method grows the number of rows dynamically, so it won't matter how many types (or SKUs) your model has; you will still get one bar per type/SKU. In order to make the number of rows dynamic, we need to listen to the OnEntry and OnExit of the zone activity: Notice the shared label on this collector. Because this label is shared, both the OnEntry and OnExit events will create it on the data object. The value of this label is the item's type. Next, we move to the Data Recording tab. Set the Row Mode to Unique Row Values, and set the row value to Partition. This means that whenever an event fires, the statistics collector will look at the Partition label on the data object. If the value is new to the statistics collector, the collector will make a new row for that value. If not, the collector will use the row that is already present. Finally, we need to make our columns. We only need two: one for the Partition, and one for the Content of that partition. Both of these columns can use the Integer storage type and raw display format. However, if your partition value is text-based, like an SKU, you should use the String storage type. The value for Partition is just the data object's Partition label: Notice that the Update option is set to When Row is Added. This way, the statistics collector knows that this value will not change, and that it's available at the time the row is created. The other column is a little harder, because we need to use the getstat command: The getstat command's arguments depend on the stat you are trying to get. In this case, we are asking the zone (current, the event node) for the Partition Content statistic. We want the current value. Since this is a process flow activity, we pass in the instance as the next argument. Finally, we pass in which partition we want to get the data from: the row value. In this case, we could equivalently have passed in data.Partition. Also, notice that this column is updated by event dependency. To make sure this does what we want, we need to edit the event/column dependency table. We want the Content column to be updated when items enter and exit, so it should look like this: Now, open the table for the statistics collector. You should see two columns. When you run the model, rows will be added as items of different types are encountered. The table will look something like this: This screenshot came from early in the run, before all 30 item types had been encountered, so it doesn't have 30 rows yet. Making the Bar Chart This is the easiest part. Create a new dashboard, and add a new bar chart. Point the chart at the statistics collector. For the Bar Title option, choose the Partition column. Be sure to include the Content column. Also, make sure that the "Show Percentages" checkbox on the Settings tab is cleared. The settings should look like this: The resulting chart looks something like the following image. You can set the colors on the Colors tab. Ordering the Data Because the rows of this table are created dynamically, their order will likely change from run to run. To force an ordering, you can use a Calculated Table. Since the number of rows in this table doesn't grow indefinitely, and the number is relatively small, it's okay to set the Update Mode on the Calculated Table to Always. Here's what the properties of that calculated table look like: We simply select all columns from the target collector (CurrentContent, in this case) and order them by the Partition column.
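The same ordering can also be reproduced in a script with Table.query(), for example (assuming the collector's table is named CurrentContent, as above):

```
// Query the statistics collector's table and order it by Partition.
Table ordered = Table.query("SELECT * FROM CurrentContent ORDER BY Partition");
return ordered.numRows; // number of rows in the ordered result (one per encountered type)
```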
That yields an ordered bar chart: Example and Additional Charts The attached example model demonstrates this method, as well as how to create a WIP By Type vs Time chart: Happy data collecting! zonecontentdemo.fsm
View full article