FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
This model contains example subflows for having a team of travellers travel and load/unload together. The Run Subflow activity for each creates that subflow's expected labels: team, teamItem, teamDestination. Currently the offset locations are just a multiple of the x size of the first traveller; they are determined from the pick offset and place offset functions. Further work could add traveller loading/unloading and wait states, and combine travel and unload tasks. TeamSubflows.fsl TeamSubflows.fsm Update 24 Oct '24: Corrected an incorrect label name in the TeamTravel subflow; it should look for 'team', not 'travellers'.
View full article
If you have a contiguous conveyor network, you can route items using Conveyor.sendItem() and FlexSim will guide the item to the destination, passing through inline and side transfers as required for the shortest path. If between some conveyors you use exit and entry transfers - perhaps to easily add elevators and shuttles as transports between them - then you'll normally be faced with adding logic to figure out which exit transfer to go to and which port to take from that transfer, and in a large model that logic can be extensive and hard to maintain.

The attached model and library provide commands for automated routing through multiple conveyor sub-sections connected through exit/entry transfers, to conveyor points and to connected fixed resources. This means that you may no longer have to write sendTo code with case statements on each exit transfer to determine which port an item should exit through, nor need decision points with case logic to decide the destination for Conveyor.sendItem(). In the example model three sources create items with random destinations which are routed through the conveyor system, transfers and ports automatically to arrive at the correct destinations - some of the ports having transport to perform the move.

To make this work in any model you should load the user library, which will auto-install a set of user commands and a General Process Flow. The first step is to run the user command createAllTravelMaps(), which calculates all the reachable destinations (decision points, stations, photo eyes, attached fixed resources and transfers) from all the conveyor points and entry/exit transfers, along with estimates of the convey time (from the conveyor class). This information is consolidated to create the shortest routes and is stored in a label 'travelMap' on each decision point, station, photo eye and transfer.

To make use of the travelMap data there are three additional user commands supplied that are intended to be used directly by the modeller:

getNextConveyPoint(thispoint, destination) - returns the next point to send an item to from this point in order to ultimately reach the destination.

getConveyExitPort(exitTransfer, destination) - returns the port through which an item should exit the exitTransfer in order to reach the destination.

getConveyItemsNextConveyPoint(item, destination) - returns the next point to which an item should travel to reach the destination from its current position on a conveyor.

The simple process flow in the example and library listens to the Group members of EntryTransfers and ExitTransfers, looks up the 'destination' label and either sends the item to the next point or, in the case of the exit transfers, overrides the sendTo port with the value from the map. I've added some documentation to the user commands which you can access easily via the command helper: ConveyorTravelMaps_0.3.fsl ConveyorTravelMapExample.fsm

You may find createAllTravelMaps() takes a while, which is why a progress bar has been added. You may not need all points to be evaluated exhaustively, so there is an option to pass in a flag indicating to only start evaluation from Entry Transfers, which will create somewhat incomplete maps for intermediate points. A future refinement would be to account for transport time from exit transfers, either by recording the times or by providing a port list with the expected times. Clearly, if you make changes to your transfer positions or conveyor layout you should rerun createAllTravelMaps().
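For illustration, here is roughly how these commands might be used once the maps are built - a minimal sketch only, not the library's internals, and it assumes the item carries a 'destination' label referencing the destination object (as in the example model):

// Hypothetical sendTo logic on an exit transfer; current is the transfer, item is the arriving item
return getConveyExitPort(current, item.destination); // port that leads toward the destination

// At a decision point, the next conveyor point could be looked up in a similar way:
// Conveyor.sendItem(item, getNextConveyPoint(current, item.destination));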
View full article
The attached model contains an object process flow for a BasicFR to sit between a regular conveyor and a mass flow conveyor. It looks at the interval between the discrete items leaving the regular conveyor and creates a new generative rate when it detects a change, allowing you to convert a stream of hundreds of items into single events for the MFC as needed and capitalize on the advantages of MFCs. It's a simple process flow: to use it, just connect a BasicFR between the two conveyors and add it to the object process flow as a member instance. This is a proof of concept and does not currently handle accumulation across the interface, or aggregation from multiple discrete conveyors. MFC_RateMaker.fsm
View full article
This article is an updated version of a previous article by @Paul Toone. Thanks to Paul for the original, which you can read here: https://answers.flexsim.com/articles/23118/flexsim-xml-18.html

In FlexSim, you can save any tree (including models and libraries) in XML format. There are many advantages to using this capability in your model development, including:

Since XML is an ASCII/text based format, it increases the utility of using content management and versioning software such as Subversion, CVS, Microsoft Visual SourceSafe, git, etc. to manage the development of a model. These systems automatically track a history of changes to a model or project, and for ASCII/text based files they allow you to see line-by-line change logs for the files, as well as revert line-item changes if needed.

With the added benefit of versioning systems comes the ability to develop a model concurrently by different modellers using a much more streamlined method. No more saving off individual .t files from one modeller's changes to a model and loading them manually into the master model. When saving to XML format, if one modeller is working on one portion of the model, only that portion of the model file changes when saved, so when the modeller checks his changes into the versioning system's repository, another modeller can automatically merge those changes into his copy. If there are conflicts, where two modellers change the same part of the model, then the versioning system has tools that allow modellers to easily see in the XML file where the conflicts occur and quickly perform the manual conflict resolution.

FlexSim also has the added ability to distribute a single model across multiple XML files. This increases your control over how to manage the model development in your versioning system, and makes conflict resolution even easier. The distributed save mechanism also opens the door for much more automated control of FlexSim. By saving small pieces of your model off into separate XML files, you can auto-regenerate one or more of those XML files outside of FlexSim, consequently changing the configuration of the model for the purpose of running different scenarios.

Tutorial

This tutorial will take you through saving a model as XML, and how you can configure the distributed save.

Build a Simple Model

First build a simple example model containing a Source, Queue, Processor and Sink.

Save in XML Format

Save the model, and in the save dialog choose "FlexSim XML" from the drop-down at the bottom. Save the model as "PostOfficeXML". Now you've saved the file in FlexSim's XML format. You can view the file in a regular text editor like Visual Studio Code, Notepad++, etc. It's not necessary to know all the details of the file format. In simple terms, the primary tag in FlexSim XML is the <node> tag, representing a node in FlexSim. The node's name is described in the <name> tag, and the node's data, if applicable, is described in the <data> tag. If you plan on doing automated changes and need a more detailed definition of the XML format, you can refer to FlexSim's XML schema (see the FlexSim install directory/help/MiscellaneousConcepts/FlexSimXML.xsd for more information).

Version Management with Git

At FlexSim, the development team uses Git to manage collaboration between developers. Part of development includes updating the MAIN and VIEW trees. In a similar way, you can use Git to collaboratively update the MODEL tree. The standard way to use Git is as follows: Set up a remote repository for the project.
There are many online services for this, including GitHub, BitBucket, and many others. Each one has different pricing and terms of use. It is also possible that your company already runs a Git hosting service where you could create your remote repository on a company server. Once you have a remote repository, each user "clones" that repository to their local computer. Everyone "pulls" the latest changes from that repository and "pushes" their changes to that repository. To manage pushing and pulling, you can use Git on the command line. However, most users interact with Git through a client. One free one is called Git Extensions: http://gitextensions.github.io/

By itself, version management is a deep topic. You might consider reading some tutorials to become more familiar with Git concepts:
https://git-scm.com/docs/gittutorial
https://github.com/git-guides
https://github.com/skills/introduction-to-github
https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud

Managing Changes with Git

Let's assume you have a repository with the model file already in it. Let's make a change by moving the queue and saving the model. Once you save the model, Git can show you the changes you've made; in this case, you can see the spatialx and spatialy changes in your Git client. At this point, you could commit and push your changes. Others could then pull the changes into the model. Notice that Git looks at changes in terms of lines being added and removed. In this case, we removed two old lines and added two new lines.

Distributed Save

As models get bigger and more complex, it can be helpful to split a single model file into multiple XML files. To do this you create a "FlexSim File Map" file. This is an XML file with the .ffm extension. It must be placed in the same directory as the model's .fsx file and, except for the extension, must be given the same name as the .fsx file. So, let's create that file. Let's also create a subdirectory to put the distributed files into.

Now edit PostOfficeXML.ffm in a text editor. The first thing we'll do is put the node MODEL:/Tools/Workspace into a different file. This is something you'll probably always want to do if you're using version management. The node MODEL:/Tools/Workspace stores all of the windows open in the model, so if you're editing a model and change or reposition the windows that are open, that will change the model file. For version management this is often just noise that doesn't need to be tracked, so we'll have the node saved off to a different file, and then we can just never (or at least rarely) check that file into the version management system. Specify the following code:

<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
    <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
</flexsim-file-map>

Save the file map, then go back into FlexSim. Save the post office model again under the same name, "PostOfficeXML.fsx". Now look in the distributedfiles directory. You'll see that it contains a new XML file named windows.fsx. This XML holds the definition of the node MODEL:/Tools/Workspace. All interaction with the model remains the same, i.e. from FlexSim you just load and save the main XML model file, but now the model is distributed across multiple files.
If you look at the change to PostOfficeXML.fsx in your Git client, you'll see that many lines have been replaced with a single line specifying the other file. This shows how much of the content of PostOfficeXML.fsx has been moved to distributedfiles/windows.fsx.

In the file map, the main document element is the <flexsim-file-map> tag. Inside the main document element should be any number of <map-node> elements. Each <map-node> element should have a path attribute, a file attribute, and a file-map-method attribute. The path attribute should specify the path, from the main saved node, to the node that is going to be saved into the distributed file. The file attribute specifies the path, relative to the saved model file, to the distributed file that you want to save. The file-map-method should have a value of either "single-node" or "split-points". In this case "single-node" means you want to save all of the Workspace node into windows.fsx.

Now let's do an example of the "split-points" method. Let's say we want to save the Tools folder in one file, the Source and Queue into another file, and the Processor and Sink in a third file. To do this, add another <map-node> element to the file map:

<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
    <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
    <map-node path="" file="distributedfiles/Tools.fsx" file-map-method="split-points">
        <split-point name="Source1" file="distributedfiles/Source_Queue.fsx"/>
        <split-point name="Processor3" file="distributedfiles/Processor_Sink.fsx"/>
    </map-node>
</flexsim-file-map>

Now save the file, save your FlexSim model, and again you'll see new files created in the distributedfiles directory. If a <map-node> element uses the "split-points" method, then it can have <split-point> sub-elements. Each split-point should have a name attribute, defining the name of a node inside the map-node's defined node, and a file attribute defining the file to save to. The map-node here has a path="" attribute, which means that it applies to the root node, i.e. the model itself. The map-node's file attribute defines the first file to use. This configuration tells FlexSim to save all sub-nodes in the model up to but not including the node named "Source1" to the file "distributedfiles/Tools.fsx", save everything from Source1 up to but not including the node named "Processor3" in "distributedfiles/Source_Queue.fsx", and save everything from Processor3 to the end in "distributedfiles/Processor_Sink.fsx".

Why Distribute?

The main reason you would want to distribute your model across multiple files is for version management. It allows you to easily see what you've changed in your model by seeing which files, and consequently which parts of the tree, have changed. If you inadvertently changed one piece of the model, you can easily see that and then revert that change if needed. When multiple modellers are developing the model, one modeller can essentially confine himself to one part of the model tree, and by associating that part of the tree with a specific file, it makes it much easier to merge changes from different developers, reduces the chance of conflicts in the merge, and makes it easier to do a manual merge if needed.

Note on connections: FlexSim's XML save mechanism does have one catch regarding input/output/center port connections, as well as any other mechanism that uses FlexSim's coupling data.
If you change connections in one portion of your model, it will actually change the serialized values of all connections/coupling data that occur after those connections in the tree, even if those connections were not changed. This can very easily cause merge issues if multiple modellers are changing connection data. However, if you distribute the model across multiple files, connection changes where both connection end-points are within the same file will only affect that file, and will not change other files. So if you can localize large "connection sets" into individual files, you can minimize the effect of changes and subsequently minimize merge conflicts.
View full article
This article describes an example of Reinforcement Learning being used to solve scheduling problems. See the model and Python files in the attached zip file. SchedulingRL.zip

Problem Description

This model represents a generic sheet metal processing plant. There are four machines in series. Each job requires time on all four machines. Jobs come in batches of 10. A poor sequence of jobs will cause blocking between items, lowering throughput. If the time between batches is long, such as a shift or a day, you could use the optimizer to determine the best sequence. If the time between batches is short, however, the optimizer may not be feasible. For real sequencing problems, the time to find a good sequence can be anywhere from 5 minutes to an hour, or even longer. This makes it impractical for high-velocity situations. The attached model requests a decision every time the first machine in the series is available. The only action is an index for the Nth available job, so the decision can be interpreted as "which job should I do next?"

Solution

The general solution is to use reinforcement learning. However, this problem required customized Python scripts:

The model uses custom parameters for observations. This allows arbitrary values for observations.

The model uses a custom observation space. The observations include a table of the required times at each station for the remaining jobs. They also include an array of the in-progress jobs and their predicted remaining times. By using a Dict space, the Python scripts can combine all the observations into a single space.

The model uses an Action Mask. An Action Mask is a binary array with one value per value of the action. This tells the RL algorithm about invalid options. The Python scripts require the sb3-contrib package. Use pip install sb3-contrib to install it.

Results

After training for 500k time-steps, the agent learns to choose jobs moderately well. If you run the inference script, you can use the experimenter to compare a random policy to a trained agent:
View full article
FlexSim's Web Server is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that allows FlexSim to be served to a browser, you may want a way to proxy to this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Nginx web server.

Install the FlexSim Web Server Program

Download and install the FlexSim Web Server from https://account.flexsim.com
Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
Change the port from 80 to 8080
Start the FlexSim Web Server by double clicking flexsimserver.bat
Test the server by going to http://127.0.0.1:8080

Install Nginx Reverse Proxy

From a browser, visit http://nginx.org/en/download.html
Download the latest stable release for Windows
Extract the downloaded nginx-<version>.zip
Rename the unzipped nginx-<version> folder to nginx
Copy the nginx folder to C:\
Double click the C:\nginx\nginx.exe file to launch Nginx
Test Nginx by going to http://127.0.0.1

Configure Nginx to proxy to the FlexSim Web Server

Open C:\nginx\conf\nginx.conf in a text editor and find the section that says:

location / {
    root html;
    index index.html index.htm;
}

Comment out the root and index directives and add a proxy_pass directive so it appears like this:

location / {
    proxy_pass http://127.0.0.1:8080;
    #root html;
    #index index.html index.htm;
}

Save the nginx.conf file.

Reload Nginx to Apply the Changes

Open a command line window by pressing Windows+R to open the "Run" box, type "cmd" and then click "OK". From the command line window, type the following to change to the nginx directory and press enter:

cd C:\nginx

Now, type the following to reload Nginx:

nginx -s reload

Test the FlexSim Web Server Being Proxied by Nginx

From a browser window, again go to http://127.0.0.1. You should now see the FlexSim Web Server interface proxied through Nginx.

Now that you have the FlexSim Web Server proxied through Nginx, you may decide you want to configure Nginx to handle security, authentication and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting up these customizations. A few resources you may consider:
https://forum.nginx.org
https://stackoverflow.com
View full article
ContinuousUtilization_4.fsm

In the attached model, you'll find a Utilization vs Time chart driven by a Statistics Collector and a Process Flow. This article is an alternative approach to the chart shown here: https://answers.flexsim.com/content/kbentry/158947/utilization-vs-time-statistics-collector.html Thanks @Felix Möhlmann for putting that article together. The concept in this model is very similar, but this is a different approach. There are pros and cons to each method. This method offers some performance improvements at the cost of writing more FlexScript code. This method also doesn't handle warmup time, where the other method does.

The general idea in this model is to keep a history of state changes for the previous hour (or other time interval). The history adds new info as states continue to change and drops old info as it "expires" by being older than the time interval. I chose to use a Bundle (stored on a token label) for the history. Bundles are optimized to add data to the end. If a bundle is paged (they are by default), then they are also optimized to remove data from the beginning.

[Begin Technical Discussion - TLDR: Bundles are fast at removing the first row and adding new rows] A paged bundle keeps blocks of memory (pages) for a certain number of rows. If a page fills up, it allocates another page. It keeps track of the location in the page for the next entry. Similarly, it keeps track of the location of the first entry on the start page. If you remove the first row, that start location is moved forward on the page. If the start location gets to the end of a page, that page is dropped and the start location moves to the next page. [End Technical Discussion]

The Process Flow maintains the history table. Whenever the object's state changes, the Process Flow adds a new entry to the history table. This part is straightforward. The tricky part is properly "expiring" the data. To do this, a second set of tokens waits for the last of the oldest data to expire. When the oldest set of data should expire, the tokens remove any rows that are too old and then wait for the next oldest row. If there are no expired rows, the tokens wait for one "time interval" and then check again. This is because if there is no expired data, data can't expire for at least one time interval. In this way, the history table is always kept up to date.

The Statistics Collector is configured to post-process the history table. It calculates the total utilization, including the state that the object is currently in, as well as the state being "phased out" from the history table. The Statistics Collector can do this calculation at any time during the model run, so the sampling interval (called Resolution in the model) is independent of the time window.
View full article
Good practice to reduce variance when experimenting is to separate the random streams of things that might vary in the model, so that the random sampling is independent. An example might be that you have a number of processors that are members of the same breakdown profile (MTBF/MTTR object), where the individual breakdowns are dependent on the state of the processor. If during one scenario a processor is used more than before, then it may sample the duration and next breakdown earlier, and therefore change the sequence of the other machines' sampling of breakdown times, increasing variance. This is because the default setting for the MTBF time fields uses 'getstream(current)' - which means a single stream for the MTBF object, shared across all members.

You could try to change this in the MTBF by using 'getstream(involved)', where 'involved' refers to the breakdown member machine. This causes other problems, since if you're sampling processing times using the machine's stream too, then the number of items processed will again change the breakdown time samples. You may judge this to be acceptable, but in an ideal world you'd still want separate streams, and may want multiple streams for setup, processing, breakdowns, or subsystem failures.

One way to accomplish this is by changing the way getstream() works such that it can generate a stream for any value you pass to it. That might be an object, as the current getstream() accepts, or it could be the string name of the object or its path. It could also be an array, which then opens a number of possibilities:

In a breakdown you could replace getstream(current) with getStream([current,involved])   // generates a unique stream number for the MTBF/machine pair*

In an Object Process Flow you could replace getstream(activity) with getStream([current,activity])   // generates a unique stream for the instance and activity pair, and works for the general process flow too.

For a processing time on a processor, instead of getstream(current) you could use getStream([current,"Processing"]) and getStream([current,"Setup"]) to generate two separate sampling streams.

The attached library contains an auto-installing user command that overrides getstream() to provide this functionality. The stream values save with the model. getStream-byvariant3.fsl

* This implementation does have some limitations, since during an experiment it does not communicate back to the master model when trying to create new streams. For this reason you'll want to try to have all possible streams set up before running an experiment, or avoid the type of actions that dynamically create the requirement for new streams - so that might mean keeping all possible fixed resources and task executers, and hiding/removing them from groups rather than destroying them, as the OnSet options of the parameters table do currently. Alternatively, if you consistently name the dynamically created instances then the MTBF stream expression could be: getStream([current, involved.name])

Update: I've edited this post and library to use getStream (capital 'S') since the override parameter (var thing) doesn't stay in place and eventually causes FlexScript build errors. So with the updated library you'll need to find/replace 'getstream' with 'getStream' from the model tree.
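For quick reference, here are the replacements described above gathered in one place:

// MTBF/MTTR time fields - a unique stream per MTBF/machine pair
getStream([current, involved])

// Object Process Flow activity - a unique stream per instance/activity pair
getStream([current, activity])

// Processor times - separate streams for processing and setup
getStream([current, "Processing"])
getStream([current, "Setup"])

// Dynamically created but consistently named members
getStream([current, involved.name])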
View full article
FlexSim 2018 includes functionality for creating a custom table view GUI using a custom data source from within FlexSim. This article will go through the specifics of how to set this up. The Callbacks Custom Table Data Source defines callbacks for how many rows and columns a table should display, what text to display in each of the cells, whether they're read only, etc. If you require further control of your table, you can use a DLL or module and subclass the table view data source in C++.

An example of how this data source is used can be seen in the Date Time Source activity properties. In the above tables, the data being used to display both of them is exactly the same. The raw treenode table data can be seen on the right. The table on the left is displaying the hours and minutes for each start and end time rather than the start time and duration in seconds.

In FlexSim 2018, there are a number of tables that currently utilize this new data source. They can be found at:
VIEW:/modules/ProcessFlow/windows/DateTimeArrivals/Table/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/CompositeStatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StateBarOptions/SplitterPane/States/Table

Copying one of these tables can be a good starting point for defining your own table. The first thing you have to do in order for FlexSim to recognize that your table is using a custom data source is to add a subnode to the style attribute of your table GUI. The node's name must be FS_CUSTOM_TABLE_VIEW_DATA_SOURCE with a string value of Callbacks. Set your viewfocus attribute to be the path to your data. This may just be a variable within your table. It's up to you to define what data will be displayed.

If you want to have the default functionality of the Global Table View, you can use the guifocusclass attribute to reference the TableView class. This gives you features like right-click menus, support for cells with pointer data (displays a sampler), tracked variables (displays an edit button) and FlexScript nodes (displays a code edit button). If you're going to use this guiclass, be sure to reference the How To node (located directly above the TableView class in the tree) for which eventfunctions and variables you can or should define.

At this point you're ready to add event functions to your table. There are only two required event functions. The others are optional and allow you to override the default functionality of the table view. If you choose not to implement the optional callback functions, the table view will perform its default behavior (whether that's displaying the cell's text value, setting a cell's value, etc.). These event function nodes must be toggled as FlexScript.

The following event functions are required:

---getNumRows---
This function must return the number of rows to display in the table. 0 is a valid return value.
param(1) - View focus node

---getNumCols---
This function must return the number of columns to display in the table. 0 is a valid return value.
param(1) - View focus node

The following event functions are optional:

---shouldDrawRowHeaders---
If this returns 1, then the row headers will be drawn. The header row uses column 0 in the callback functions.
param(1) - View focus node

---shouldDrawColHeaders---
If this returns 1, then the column headers will be drawn. The header column uses row 0 in the callback functions.
param(1) - View focus node

---shouldGetCellNode---
Allows you to decide whether you want to override the table view's default functionality of getting the cell node.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

If shouldGetCellNode returns 1 then the following function will be called:

---getCellNode---
Return the node associated with the row and column of the table.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

---shouldSetCellValue---
Allows you to decide whether you want to override the table view's default functionality of setting a cell's value.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldSetCellValue returns 1 then the following function will be called:

---setCellValue---
Here you can set your data based upon the value entered by the user.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
param(3) - The value to set the cell

---isCustomFormat---
Allows you to decide whether you should define a custom text format for the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If isCustomFormat returns 1 then the following functions will be called:

---getTextColor---
Return an array of RGB components that will define the color of the text. Each component should be a number between 0 and 255: [R, G, B], for example red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---getTextFormat---
Return 0 for left align, 1 for center align and 2 for right align.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---shouldGetCellColor---
Allows you to decide whether you should define a color for the cell's background.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellColor returns 1 then the following function will be called:

---getCellColor---
Return an array of RGB components that will define the cell's background color. Each component should be a number between 0 and 255: [R, G, B], for example red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

---shouldGetCellText---
Allows you to decide whether you want to override the table view's default functionality of getting a cell's text.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellText returns 1 then the following function will be called:

---getCellText---
Return a string that is the text to display in the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The desired number precision
param(5) - 1 if the cell is being edited, 0 otherwise

---shouldGetTooltip---
Allows you to decide whether a tooltip should be displayed when the user selects a cell in the table.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

If shouldGetTooltip returns 1 then the following function will be called:

---getTooltip---
Return a string that is the text to display in the tooltip.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

---isReadOnly---
Return 1 to make the cell read only. Return 0 to allow the user to edit the cell.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
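As a minimal illustration, the two required callbacks might look something like the following - a sketch only, which assumes the viewfocus points at a simple tree table whose subnodes are rows and whose row subnodes are cells (your data layout may well differ):

---getNumRows---
treenode focus = param(1);        // view focus node
return focus.subnodes.length;     // one row per subnode (assumption about the data layout)

---getNumCols---
treenode focus = param(1);        // view focus node
if (focus.subnodes.length == 0)
    return 0;
return focus.first.subnodes.length; // cells of the first row (assumption about the data layout)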
View full article
The attached model provides an example of how to record and display overtime hours worked by staff in a small clinic. The following screen capture of the model after a 30 day run shows the various dashboards displaying information about the working hours of the staff in the model. The code snippet shown for the Activity Finished Trigger of the final exit activity in the patient track is used to record information on a couple of global variables and then trigger a recording event on the OT_DataCollector. The code snippet is documented in detail, so hopefully you will understand what is being done. As you can see in the Data Collector properties window, nothing special is going on there except recording the two global variables named GV_Overtime_Hours and GV_TotWork_Hours. The third and final piece of the puzzle is to create some User Defined dashboard widgets to display the information captured by the data collector in a few different ways. clinic-overtime-example-with-custom-data-collector.fsm
View full article
Attached is an example DLL Maker project that calls a Java function using the Java Native Interface (JNI) from C++. The steps for how this works are outlined below:

1. Install the Java Development Kit (JDK). In creating this example, I used OpenJDK installed via Microsoft's special installer of Visual Studio Code for Java developers.
2. Write a Java program. In my example, I created a simple Hello class with an intMethod() as described in IBM's JNI example tutorial.
3. Compile the .java file into a .class file. I did this using the Command Prompt and executing the following command: javac Hello.java
4. Configure the DLL Maker project to include the JNI library: In VC++ Directories > Include Directories, add the following two directories: C:\Program Files (x86)\Java\jdk1.8.0_131\include and C:\Program Files (x86)\Java\jdk1.8.0_131\include\win32. In Linker > Input > Additional Dependencies, add jvm.lib. In Linker > Input > Delay Loaded Dlls, add jvm.dll.
5. Include jni.h in your code. (Line 51 of mydll.cpp)
6. Create a JVM in your code. (Lines 61-102 of mydll.cpp)
7. Get a handle to the method you want to call and then call it. (Lines 118-126 of mydll.cpp)
8. Connect a User Command in FlexSim to fire that C++ code that executes Java code. See the dll_maker_test_model.fsm included in the attached zipped directory.

* Note: this example was built with 32-bit FlexSim because the JDK I installed was 32-bit. Using a 64-bit JVM is beyond the scope of this simple example and is left as an exercise for the user.
View full article
In version 2018 and forward, you can make this chart using a Chart Template. You can simply drag and drop the chart from the dashboard library. This article may help you understand how the chart template works. You can use the Install button on the chart template to view the Process Flow, Statistics Collector, and Calculated Table that make the chart.

This article reviews how to use the Zone, along with the Statistics Collector, to create a bar chart of the current work-in-progress (WIP) by item type. The method scales to as many types as you need (this example uses 30 types), and can easily be adapted to text data, like SKU. An example model (zonecontentdemo.fsm) demonstrates this method.

Creating the Process Flow

To create this chart, we first need to gather the data for it. In this case, it is easiest to build off the capabilities of the Zone. In particular, we will use Zone Partitions to categorize all of our objects. After we set up the Zone, we'll use a Statistics Collector to gather the data that we need. Create a new General Process Flow. You can have as many General Process Flow objects as you want, so let's make one that just deals with gathering statistics. This way, gathering statistics will not interfere with the logic in our model. The process flow should look something like this:

Here's how it works. The Listen to Entry source is configured to listen to a group of objects. In this case, the group contains all the sources in the model, and it's listening to the OnExit of the sources. However, it could be OnEntry or OnExit of any group of 3D objects. If you want to split the statistics by Type or SKU, then any flowitem that reaches the entry group already has the appropriate labels. In this example model, when a flowitem leaves any source, a token gets created. The token makes a label called Item that stores a reference to the created item, as shown in the following picture.

The next step is to link the flowitem with the token that represents it. The Link Token to Item activity is configured like this:

Now, the Item has a label that links back to the token. The token then enters a zone. The Zone is partitioned by type. Finally, the token comes to a Decide activity. The Decide is configured not to release the token; the token will be released by the second part of the flow. Once the token is released, it exits the zone and goes to a sink.

The second part of the flow also has an Event-Triggered Source, which is configured to listen to all the sinks in the model. Again, the entry objects and exit objects are arbitrary; you can gather data for the entire model, or for just a small section of the model, using this method. The Event-Triggered Source also caches off the item in a label. At this point, we need to release the token that was created when the item entered the system. To do this, the Release Token activity is configured as seen here:

The token created on the exit side has a reference to the item, which has a reference back to the token created on the entry side. We release this token to 1, which means connector 1. Note: We could have used a Wait for Event activity in the zone, and then used the match label option to wait for the correct item to leave the system. However, this method is much, much faster, especially as the number of tokens grows.

Creating the Bar Chart Statistics Collector

The next step is to create a Statistics Collector that gathers data appropriate for a bar chart. Note that this method will grow the number of rows dynamically, so it won't matter how many types (or SKUs) your model has; you will still get one bar per type/SKU. In order to make the number of rows dynamic, we need to listen to the OnEntry and the OnExit of the Zone activity.

Notice the shared label on this collector. Because this label is shared, both the OnEntry and OnExit events will create this label on the data object. The value of this label is the item's type. Next, we move to the Data Recording tab. Set the Row Mode to Unique Row Values, and set the row value to Partition. This means that whenever an event fires, the Statistics Collector will look at the Partition label on the data object. If the value is new to the Statistics Collector, the collector will make a new row for this value. If not, then the collector will use the row that is already present.

Finally, we need to make our columns. We only need two columns: one for the Partition, and one for the Content of that partition. Both of these columns can use the Integer storage type and raw display format. However, if your partition value is text-based, like an SKU, you should use the String storage type. The value for Partition is just the data object's Partition label. Notice that the Update option is set to When Row is Added. This way, the Statistics Collector knows that this value will not change, and that it's available at the time the row is created.

The other column is a little harder, because we need to use the getstat command. The getstat command arguments depend on the stat you are trying to get. In this case, we are asking the zone (current, the event node) for the Partition Content statistic. We want the current value. Since this is a process flow activity, we pass in the instance as the next argument. Finally, we pass in which partition we want to get the data from, the row value. In this case, we could equivalently have passed in data.Partition. Also, notice that this column is updated by event dependency. To make sure this does what we want, we need to edit the event/column dependency table. We want the Content column to be updated when items enter and exit, so it should look like this:

Now, open the table for the Statistics Collector. You should see two columns. When you run the model, rows will be added as items of different types are encountered. The table will look something like this:

This screenshot came from early in the model run, before all 30 types of item have been encountered, so it doesn't have 30 rows yet.

Making the Bar Chart

This is the easiest part. Create a new dashboard, and add a new bar chart. Point the chart at the Statistics Collector. For the Bar Title option, choose the Partition column. Be sure to include the Content column. Also, make sure that the "Show Percentages" checkbox on the Settings tab is cleared. The settings should look like this:

The resulting chart looks something like the following image. You can set the color on the Colors tab.

Ordering the Data

Because the rows of this table are created dynamically, the order of the rows will likely change run to run. To force an ordering, you can use a Calculated Table. Since the number of rows in this table doesn't grow indefinitely, and the number is relatively small, it's okay to set the Update Mode on the Calculated Table to Always. Here's what the properties of that Calculated Table look like:

We simply select all columns from the target collector (CurrentContent, in this case) and order them by the Partition column. That yields an ordered bar chart:

Example and Additional Charts

The attached example model demonstrates this method, as well as how to create a WIP By Type vs Time chart. Happy data collecting! zonecontentdemo.fsm
View full article
This article demonstrates how to use the Statistics Collector and Calculated Tables to create three utilization pie charts: a state pie chart, an individual utilization pie chart, and a group utilization pie chart.

Example Model

You can download the model (utilizationdemo.fsm) to see the working demonstration. The model has a Source, a Processor, a Sink, a Dispatcher, and several operators. The operators carry flow items to and from the processor, as well as operate the processor. The operators are in a group called Operators.

State Pie Chart

First, we need to make a Statistics Collector that collects state data for the operators. The easiest way to do that is to use the pin button to pin the State statistic for any object in the model. Use the pin button to pin a pie chart to a new dashboard. The pin button creates a new Statistics Collector, as well as a new chart. Open the properties for the new Statistics Collector (double click on it in the toolbox), and change its name to OperatorStatePie. On the Data Recording tab, remove the object from the Enumerated Rows table. Using the sampler, add the Operators group (you can sample it in the toolbox). Now, when you reset and run the model, the state chart should work.

Utilization Pie Chart

Often, users need to combine several states into a single value that can be used to determine the utilization of an object in the model. In order to gather this data, we can use a Calculated Table. Make a new Calculated Table, and give it the following query:

SELECT Object, (TravelEmpty + TravelLoaded + Utilize) / Model.statisticalTime AS Busy, 1 - Busy AS NotBusy FROM OperatorStatePie

This query sums the time in several states into a total, and then divides by the statistical time. Be sure to set the name of the table (the part after FROM) to the name of your Statistics Collector. Run the model for a little while, and then click the Update button on the properties window. You should get a table like the following:

Instead of viewing the data as just numbers, change the Display Format of each column to better represent the data. On the Display Format tab, set the Object column to display Object data. Then set the other two columns to display percentages. When you switch back to the Calculations tab, the data will be formatted.

Once the query is right, set the update mode to Always. This will update the data in the table whenever the data is needed, including every time the chart draws. If updating the table is computationally expensive, you can use the By Interval or Manual options. Generally, a small number of rows (1-100) is small enough to use the Always mode. Regardless of the update mode, we can make a chart based on this table. In the dashboard, create a new Pie Chart. For the Data Source, select the Calculated Table. For the Pie Title, select the Object column. For the Center Data, select the Busy column. Be sure to include the Busy and NotBusy columns. This should show you a pie chart comparing each operator's busy and not-busy time.

Group Utilization

To make the final utilization chart, make a second Calculated Table. The query for the second table should be as follows:

SELECT AVG(Busy) AS AvgBusy, AVG(NotBusy) AS AvgNotBusy FROM CalculatedTable1

Again, use the Update button to be sure the query is correct. Once it is, set the update mode to Always.
Finally, you can make the pie chart for this data:

Things to Try

If you feel comfortable with this model, you can try a couple of extra tasks, such as:

Remove one of the operators from the group, reset, and run. The charts will update accordingly.

Add the Processor to the group, reset, and run. The state chart should work automatically.
View full article
What is ODBC?

FlexSim can use ODBC, and ODBC can connect to many different kinds of data sources, including files (like Excel) and databases. It allows you to use SQL queries to get data from any supported data source. ODBC uses a driver to determine how to get data from a given data source. A driver is a translator between ODBC and whatever data source you are querying. For example, there is a driver for Excel files. If you have that driver on your system, you could use ODBC to query data from Excel files. As another example, there is a driver for SAP HANA, the database for SAP. If you have that driver, then ODBC will know how to talk to SAP. This means that if the correct driver is installed and working correctly, you can use FlexSim to query any ODBC data source.

Using the Database Connector

FlexSim has a tool called the Database Connector. You can use it to configure a connection to a database. This article uses that tool. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/DatabaseConnectors/

As an example, let's say that you want to connect to Excel using ODBC. Assuming you have the Excel ODBC driver installed on your system, you can configure a Database Connector to look like this:

Note that you must specify the full connection string. In the connection string, you can see that the driver and the file are specified. Then you can query the data in a given sheet with a query like this:

SELECT * FROM [Sheet1$]

You can find more information on querying Excel files at websites like this: https://querysurge.zendesk.com/hc/en-us/articles/205766136-Writing-SQL-Queries-against-Excel-files-Excel-SQL-

If you use Office 365, you may need to install the Microsoft Access Database Engine 2016 Redistributable. This includes newer drivers for Excel and Access. Be sure to install it with the /quiet flag on the command line. Instructions can be found in this troubleshooting guide: https://docs.microsoft.com/en-us/office/troubleshoot/access/cannot-use-odbc-or-oledb

Note that FlexSim has an Excel tool, which is usually easier to use. This tool requires Excel to be installed, but does not require the ODBC driver for Excel to be installed on your computer. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/ExcelInterface/. Excel makes a good example because most people have it, and it's easy to get the driver for it.

Connection Strings

Different kinds of connections require different connection strings. The following list has an example connection string for a few data sources:

Excel: Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=Path\To\Excel\File.xlsx

Access: Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=Path\To\Access.accdb

SQLite: Driver={SQLite3 ODBC Driver};Database=Path\to\sqlite.db

SAP HANA: DRIVER={HDBODBC};UID=myUser;PWD=myPassword;SERVERNODE=myServer:30015

Checking for Drivers

Note that each connection string specifies a driver, and then additional information. The additional information depends on the driver you are using. In order to determine which drivers are on your system, you need to open the ODBC Data Sources Administrator window. To do that, hit the Windows key and then type ODBC. Then choose the option called ODBC Data Sources (64 bit). If you are running 32-bit FlexSim, open the 32-bit version. Go to the Drivers tab. Here is what my Drivers tab looks like:

You can see I have drivers for Access, Excel, SQL Server, and SQLite3. I don't have drivers for SAP HANA.
If I did, you'd see a driver named HDBODBC in the list. To access that kind of database, I'd need to install that driver. You can also see that the name of the driver used in the connection string must match exactly what is shown here.

Other Info

You may see an exception appear when you test the connection to your database. If the view shows that the connection succeeded, then it has succeeded. The exception happens because FlexSim tries to get a list of tables from the database that it's querying. FlexSim may not guess correctly for your particular data source. That exception can be safely ignored.

If you used the old db() commands in the past, consider upgrading to the Database Connector. It will be orders of magnitude faster to read in an entire table.
View full article
One of the most powerful uses of a Fixed Resource Process Flow is to unify a set of machines and operators as a reusable entity. For example, many factories have several production lines. Simulation modelers often wonder what would happen if they could either open or close more production lines. If you were to open a new line, would the increased output be worth the cost? Or if you were to close a line, would the factory still be able to meet demand? These are the kinds of questions modelers often ask, and by using a Fixed Resource Process Flow, you can answer them.

This article demonstrates how to use a Fixed Resource Process Flow (or FR Flow) to coordinate several machines and operators as a single entity. The example model represents a staging area, where product is staged before being loaded on to a truck. However, most of the concepts that are discussed could be used in any Fixed Resource Process Flow.

Creating a Collection of Objects

When you set out to build a reusable FR Flow, it is usually best to start by making a single instance of the entity you want to replicate. Often, it is convenient to build that entity on a Plane. When you drag an object in to a Plane, it is owned by the Plane, which makes the Plane a natural collection. If you copy and then paste the Plane, the copy will have all the same objects as the original Plane. This makes it very easy to make another instance of your entity: just copy and paste the first. If the Plane is attached to an FR Flow, then the copy will also be attached to the same Flow, and will then behave the exact same way.

In the example model, you will find a Plane with a processor, five queues, and two operators. This makes up a single staging area. The Plane itself is attached to the FR Flow, meaning an instance of the FR Flow will run for the Plane. It also means the Plane can be referenced by the value current. (You may find it helpful to set the reset position of any operators, especially if they wander off the plane at any point during the model run.)

Using Process Flow Variables

In order to drive the logic in your entity using an FR Flow, you will need to be able to reference the objects in each entity, so that they will be easily accessible by tokens. For example, if items are processed on Machine A, Machine B, and Machine C in a production line, then you would want an easy way to reference those machines in each line. This is where Process Flow Variables come in. In the example model, you will find the following Process Flow Variables:

Every staging area has a Packer, a Shipper, and a Palletizer, each referenced using the node command. Remember that current is the instance object, which is the Plane. Once these variables are in place, you could create an FR Flow like the following:

If you ran this model with the Plane shown previously (complete with correctly named objects) and this flow with the shown variables, then the Packer operator would travel to the Palletizer. If you then copied the Plane, you would see that all Packers go to their respective Palletizers, as shown below:

Using Variables and Resources Together

Sometimes, you may want to adjust the number of operators per line, or even the number of operators in a specific role. To do this, you can again use Process Flow Variables.
The sample model includes these variables:

The sample model also includes these resources; the properties for the Shipper Resource are shown:

Because this resource is Local, it is as if each attached object has its own version of this resource. Because it references a 3D operator, it will make copies of that operator when the model resets. The number of copies it creates will depend on the local value of the ShipperCount variable. To edit the value for a particular instance object, click on that object in the 3D view, and edit the value in the Quick Properties window.

Disabling Entities

Often, modelers simply want to "turn off" parts of their model. If that part of the model is controlled by a Fixed Resource flow, then you can easily accomplish this task. In the example model, you will find the following set of activities:

The Areas resource is numeric, and it is global. The Limit Areas activity is configured so that any token that can't acquire the resource immediately goes to the Area Disabled sink. If the token that controls the process dies, then the area for that token is effectively disabled. The example model uses a Process Flow Variable to control how many areas can be active at a time:

The Areas resource uses this value to control how many shipping areas are actually active. If this number is smaller than the number of attached objects, then some of the areas won't run. Note that the order the objects are attached in matters. The first attached area will be the first to generate a token in the Init Area source, and so will be the first one to acquire the area.

Using Process Flow Variables in the Experimenter/Optimizer

Currently, only Global and User Accessible variables can be accessed in the Experimenter. In the sample model, the only variable that can readily be used in an experiment is the TotalAreas variable, which dictates how many areas can be active. This allows the Experimenter or Optimizer to disable lines. To make it possible for the Experimenter or Optimizer to vary the number of Shippers or Packers per Area, the ShipperCount and PackerCount variables could be changed to Global Variables. You could put labels on the instance object (the Plane) that are read by the variable, like so:

Then the Experimenter could set the label value on the Plane object that is attached to the FR Flow. The Experimenter would set the label, which would affect the value of the variable, which would affect how many copies of the Packer were created by the Packer resource. You could do the same for the ShipperCount variable.

Sample Model

The attached model (usingfixedresources-6.fsm) provides a working model that demonstrates the principles in this article. It comes with only one Plane object attached to the FR Flow. Here are some things to try with the model:

Reset and run the model to see how it behaves with a single area, where that single area has only one Packer and one Shipper.

Create a copy (use copy/paste) of the Plane. Reset and run the model again to see how a second area is automatically driven by the same flow. You may need to adjust the TotalAreas variable if the second area doesn't run.

Adjust the number of Packers and Shippers (using the Process Flow Variables) on each Plane. Reset and run the model to see how adding more packers and shippers affects it.

Create many copies of the first Plane (it is best to set the PackerCount and ShipperCount to 1 before copying). Reset and run the model to observe the effect.

Add or remove some staging areas (queues) to an Area.
Sample Model

The attached model (usingfixedresources-6.fsm) provides a working model that demonstrates the principles in this article. It comes with only one Plane object attached to the FR Flow. Here are some things to try with the model:

- Reset and run the model to see how it behaves with a single area, where that single area has only one Packer and one Shipper.
- Create a copy (use copy/paste) of the Plane. Reset and run the model again to see how a second area is automatically driven by the same flow. You may need to adjust the TotalAreas variable if the second area doesn't run.
- Adjust the number of Packers and Shippers (using the Process Flow Variables) on each Plane. Reset and run the model to see how adding more Packers and Shippers affects it.
- Create many copies of the first Plane (it is best to set the PackerCount and ShipperCount to 1 before copying). Reset and run the model to observe the effect.
- Add or remove some staging areas (queues) in an Area. The model handles those changes as well (in the Put Stations On List part of the flow).
- Try to set up an experiment or optimization that varies how many lines are active and how many Packers and Shippers are in each.

In general, play around with the sample model, or try this technique on your own.

Other Applications

This technique can be used to create production/packaging lines, complex machines, or any conglomerate of Fixed Resources and Task Executers. If you follow the methods outlined in this article, you will be able to create additional instances of complicated custom objects with ease. You will also be able to configure your model for use with the Experimenter or Optimizer.
View full article
Many simulation models need to show operator or machine utilization by time period of the day, or per shift: shiftutilization.fsm

Under Construction

Summary

Until this article is completed, here is the basic idea. I use a process flow to create one token per operator in the group. Each operator token creates one token per shift. The shift token creates its own Categorical Tracked Variable (which is how state information is stored on all objects). The shift token then just remains alive, holding the tracked variable. The operator token goes into a loop, listening to OnChange of the 3D Operator's state profile node. Whenever the state changes, the token sets the shift token's state label to the same state, which essentially copies the state changes onto another tracked variable. Another child token listens for when the shift changes. When the shift changes, the operator token puts the current shift profile into some unused state (STATE_SCHEDULED_DOWN, in this case), and then applies new state changes to the correct shift token's label. A Statistics Collector listens for when the operator token starts the main loop at the beginning of each shift. When this happens, the collector saves a row value that is an array composed of the operator and the shift token. The row mode is set to unique, so that each unique operator/profile combination gets its own row. Each column in the collector is made to show the data from the shift corresponding to that row. The result is the table shown in the article, which can be plotted with a pie chart as shown above.
View full article
In this article, you will learn how to export data from individual replications to an external database. This is an advanced tutorial, and assumes some exposure to databases and SQL. It also assumes that you are familiar with the Experimenter and/or Optimizer features in FlexSim. This tutorial uses PostgreSQL, but should be compatible with most popular SQL database engines.

Model Description

Let's assume you have a model with at least one Statistics Collector, as well as an experiment ready to go. The model used in this example is fairly simple: As the model runs, a Statistics Collector fills in a table. The collector makes a new row for every item that goes into the Sink, recording the time and the Type label of the item: For the experiment, we don't have any variables, but we do have one Performance Measure: the Input of the Sink. Of course, an actual model would have many Variables, Scenarios, and Performance Measures, but this model leaves out those details. For reasons that come up later, this model also has two global variables, called g_scenario and g_replication. We will use these during the experiment phase.

Connecting to the Database

To connect to a database, you can use the Database Connector tool. Each Database Connector handles the connection to a single database; if you need to connect to more than one database, you will need more than one Database Connector. For this model, I have added a Database Connector and configured the Connection tab as follows: This configuration allows me to connect to a PostgreSQL database called "flexsim_test" running on my computer at port 9001. You can test the connection by clicking the "Test Connection" button. The test attempts to connect and to query the list of tables found in the database.

Setting Up the Export

Next, take a look at the Export tab: This tab specifies that we are exporting data from StatisticsCollector1 to the table called experiment_results in the database (this table should exist before you set up the export). The Append to Table box means that when the export occurs, the data will be added to the table; otherwise, the existing data in the target table would be cleared first. The other interesting thing here is that we are exporting more columns than the Statistics Collector has, namely the Scenario and Replication numbers (taken from the g_scenario and g_replication global variables). However, the expression in the From FlexSim Column column must be valid FlexQL (FlexSim's internal SQL language). Wrapping the values in Math.floor() leaves them unchanged and works well with FlexQL.

Setting Up the Experimenter

Finally, let's take a look at the little bit of code that makes the Experimenter dump data to a database. There is code in Start of Experiment, Start of Replication, and End of Replication: In the Start of Experiment trigger, we need to clear the table. The code in this trigger connects to the database, runs the necessary query, and then closes the connection to the database. In the Start of Replication trigger, the code simply copies the replication and scenario values into the global variables we created for this purpose. In the End of Replication trigger, the code uses function_s to call "exportAll" on the database connector that we created. This uses the settings on the Export tab to dump the data from StatisticsCollector1 to the database table, including the replication and scenario columns. These same triggers run during an optimization, so the same logic applies there.

Run the Experiment

Finally, you can run the experiment.
When each replication ends, the Statistics Collector data will be exported to the database. Note that the child processes that run the replications each open their own connection to the database at the end of their replication, so many connections may be open at the same time; this is fine, because databases are designed to handle many concurrent connections. But how can we tell that it worked? If you connect to the database with another tool, you can see the result table. Note that there are 1700 rows, and that data from Replication 5 is included. Additionally, you could use the Import tab to import all the data, or a summary of the data, into FlexSim.

Conclusion

Storing data in a database during an experiment is not difficult to do. There are many excellent tools for analyzing and visualizing database tables, which you could then use to further explore and understand your system. postgresqlexperimentdemo.fsm
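As a rough illustration of the three triggers described above, here is a hedged FlexScript sketch. The connector name "DB1", its tree path, the SQL text, and the scenario/replication variable names are assumptions made for illustration; only the table name experiment_results and the use of function_s to call "exportAll" come from this article, so check the Database Connector documentation for the exact FlexScript API in your version.

    // --- On Start of Experiment ---------------------------------------------
    // Clear the results table so the experiment starts from an empty table.
    // Assumes a Database Connector named "DB1" and that the Database.Connection
    // FlexScript class is available (an assumption for this sketch).
    Database.Connection con = Database.Connection("DB1");
    con.connect();
    con.query("DELETE FROM experiment_results");
    con.disconnect();

    // --- On Start of Replication --------------------------------------------
    // Copy the current scenario and replication numbers into the global
    // variables read by the Export tab's extra columns. The names "scenario"
    // and "replication" are assumed to be provided by the trigger.
    g_scenario = scenario;
    g_replication = replication;

    // --- On End of Replication ----------------------------------------------
    // Run the export defined on the connector's Export tab, writing
    // StatisticsCollector1's rows (plus the two extra columns) to the table.
    treenode dbConnector = Model.find("Tools/DatabaseConnectors/DB1"); // path assumed
    function_s(dbConnector, "exportAll");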
View full article
One of the most powerful features of Process Flow is the ability to easily define a Task Sequence. However, many real-life situations require the coordination of multiple workers and machines to do a single task. This article demonstrates one approach you can use with Process Flow to coordinate multiple Task Executers, or in other words, to create a coordinated task sequence. This article discusses an example model (handoff.fsm). It might be easiest to open that model, watch it run, and read this article with the model open.

The Example Scenario

Here is a screenshot of the demo model used in this article: Items enter the system on the left. The yellow operator must carry each item to the queue in the middle, and then wait for the purple operator to arrive. Once the two operators are both at the middle queue, the yellow operator can unload the box, and the purple operator can take it. After this point, the yellow operator is free to load another item from the left queue. The purple operator takes the item, waits for a while, and then puts the item in the sink on the right. The interesting part of this model is the handoff. The yellow operator must wait for the purple operator, and vice versa. This is the synchronization point, and it requires coordination of both operators. The approach used in this model allows you to add more operators to the yellow side, and more to the purple side, while still ensuring that a yellow operator must wait for a purple operator before unloading the box.

The Example Model

In addition to the 3D layout shown previously, there are five process flows in the example model. The first is a general flow, and defines the logic for each task. The second is a Task Executer flow, and defines the logic for the yellow operator. The third is also a Task Executer flow, and defines the logic for the purple operator. The remaining two flows are synchronization flows, for synchronizing between the other three flows.

Synchronizing on a Task

The basic approach in this model uses the Synchronize activity. This activity waits for one token from each incoming connector before it allows any of the tokens to move on. Here is the Yellow Purple Sync flow (a global Sub Flow) from the example model: The flows for the yellow and purple operators each use the Run Sub Flow activity to send a token to this flow, to their respective start activities (you can use the sampler on the Run Sub Flow activity to sample a specific start activity in a sub flow). This is what allows the yellow and purple operators to wait for each other. However, it is important that the yellow and purple operators are both doing the same task. In this model, there is a token that represents each item that needs to be moved. Both operators get a reference to this task token. The Synchronize activity is set up to partition by that task token. That means a yellow operator and a purple operator must both call this sub flow with the same task token, ensuring that each task has its own synchronization. In the example model, this kind of synchronization happens between operators, and it also happens between each task and an operator: basically, the task must wait for the operator to finish that operator's part.

The Task Flow

A task token is created every time an item enters the first queue. The task flow puts that task on both the Yellow and Purple lists. In both cases, the task token does not wait to be pulled, but keeps itself on the list.
Then the task token waits for a yellow operator to finish with it, and then for the purple operator to finish with it. There is a zone in this flow, but its only purpose is to gather statistics on how long the whole task took.

The Yellow and Purple Flows

These flows are easiest to understand when viewed side by side: Recall that each task is put on both the Yellow and Purple lists at the exact same model time. The yellow operator waits to get a task (at the Get Task activity). Then the operator travels to the first queue, gets the item, and travels to the second queue. At this point, the yellow operator waits. At the same time, the purple operator is also waiting for the task. The purple operator just has to travel to the second queue before waiting for the yellow operator. Once the yellow operator arrives, the purple operator also has to wait for the yellow operator to unload the box. On the yellow side, once the purple operator arrives, the yellow operator unloads the box and then synchronizes with the purple operator, allowing the purple operator to load the box.

Summary

The purpose of this article is to show one method for synchronizing tokens in separate flows. That method is as follows:

- Have a token for each task.
- As each task executer (or fixed resource) needs to synchronize, it uses a Run Sub Flow activity, putting the token into a specific Start activity.
- The Sub Flow (a global Sub Flow) has a Synchronize activity that requires a token from each participant for that task before releasing the tokens.

This is certainly not the only way to create this model. However, there are some advantages:

- By forcing the task to synchronize, you can gather statistics on how long each phase of the task took, as well as how long the complete task took.
- You can add more yellow or purple operators by copy/paste; they simply follow their own logic.
- Each set of logic is separated: tasks, yellow operators, and purple operators each have their own flows, making each one much simpler.

The exact approach used in the example model will not work as-is for every model. However, you can apply the general principles and adapt them to your own situation.
View full article