FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy-to-set-up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm

All necessary setup is done through labels on the collector. The first three are actually identical to labels found on the default Statistics Collector behind a state bar chart.
- Objects should point at a group that contains all objects the collector should track the utilization for.
- StateTable is a reference to the state table that will be used to determine which state counts as 'utilized'.
- StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile).
- MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization.
- NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600 s, taking a measurement every 300 s, i.e. 12 times within that interval. Each measurement still denotes the utilization over the complete "MeasureInterval". The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the hour preceding that point in time.
- StoredTimeMap is used to allow the collector to function correctly past a warmup time, by storing the total utilized value of each object up to that point. This should not be changed manually. Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply".

The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The "NumSubIntervals" label essentially just controls how many entries are kept in that array.

To copy the collector into another model you can create a fresh collector in the target model, then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector.

I hope this can help speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it). Best regards Felix

Edit: Added a user library with the collector as a draggable icon to the attached files.
Edit2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this.
Edit3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
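As an aside, the rolling-window calculation described above boils down to just a few lines of FlexScript. This is a minimal sketch, not the collector's actual internals; the names (UtilizedTotals, currentUtilizedTotal, numSubIntervals, measureInterval) are placeholders for the row label and collector labels described above.

Array utilTotals = row.UtilizedTotals;         // placeholder row label: cumulative utilized-time samples for one object
utilTotals.push(currentUtilizedTotal);         // newest sample, taken this sub-interval
while (utilTotals.length > numSubIntervals + 1)
    utilTotals.shift();                        // drop samples older than MeasureInterval
double utilization = (utilTotals[utilTotals.length] - utilTotals[1]) / measureInterval;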
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm

The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2, and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories, namely initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based order generation as well as history-based order generation.

Initial Inventory
The model includes a Global Table of Initial Inventory. The process flow's initial inventory section reads this table, and then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot().

Inbound
I use the process flow to assign a slot to each incoming item. I use an Assign Labels activity called Find Slot to do this. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity properly finds a slot to store the item, then I go ahead and assign the item to that slot, and have an operator place it in the rack.

Outbound
I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method.

In the random method, orders are generated randomly every ~30 seconds. Each order includes a number of SKU line items (again, random) and each line item includes a quantity of that SKU (again, random). Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, the token reserves the item as "outbound" by setting the Storage.Item.assignedSlot property to null (Set to Outbound activity). This ensures that no other process will find that same item for picking.

The history-based order generation process uses much of the same functionality as the random-based one, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format for what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order and then spawning that order.

Custom Rack Visualization
I have also customized the visualization of the racks. I have added text to the front of each rack slot (and to the floor at the bottom) that shows the address of that slot. Further, I've given the text a background that is color-coded to the SKU that that slot is designated to store. This was all done through the Storage System's Visualizations tab, by customizing the Rack visualization.
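For reference, the Find Slot query described in the Inbound section might look roughly like the following FlexScript. This is a hedged sketch based only on the description above: the Type label name, the hasSpace() condition, and the use of ORDER BY RAND() are assumptions, not code copied from the model.

Storage.Slot slot = Storage.system.findSlot(
    "WHERE slot.Type = $1.Type AND slot.hasSpace($1) ORDER BY RAND()", 0, item);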
This model and library will allow you to produce a heat map of anything moving in the model - including AGVs and flowitems. To add this to a model is simply a matter of:
1) Load the attached user library
2) Add objects to the group HeatMapMembers
3) Drop a heat map object (cylinder) into the model - reset and run.
With this updated version you can now have multiple mapper objects in the same model showing different groups - made easier by the addition of a 'groupName' label on the mapper. You can easily change the height at which the map is drawn using the 'zdraw' label, and alter the sampling interval and grid size using the 'heatInterval' and 'resolution' labels. The resolution is the number of divisions per model length unit. In the example model, set to metres, a value of 2 gives 4 divisions per square metre. Currently non-flowitems are set to ignore time when the object is in an idle state. HeatMapAnything.fsl HeatMapAnything.fsm
The attached model contains a basicTE to mimic some operations of a Tower Crane. You should be able to use it like any other task executer. Labels on the crane allow the speeds and operating heights to be altered. To change the jib/beam length, use the label parameter and it will apply at reset. Similarly, to change the height, for now just change the tower height and press reset to have the rest attached at the correct height. TowerCrane_basicTEexample.fsm
Update: Added a user library that will scale the crane based on the model units. Also changed some labels so that rotational speed is specified there, and the jib/beam now uses the object properties for max speed and acceleration. TowerCrane.fsl
The attached model contains functionality to depict the item flow as a 3D map using a FlowMapper3D object (cylinder) and an associated Object Process Flow. Additionally, a 'kpi' label on the object gives an indication of layout performance, which you can link to and observe as you interact with or experiment on the layout. To set this up in your model you'll need to add a Group of objects whose entry events will be used by the mapper, calling that Group "FlowMapperObjects". Then you'll need to add a ColorPalette called "HeatPalette". Finally you'll want to copy the FlowMapper3D object and the FlowMapperProcess to your model. Note that there is a boolean label 'showPercents' on the FlowMapper3D object to tell it whether to show percentage text or the number of flowitems for each location pair. 3DFlowMapper.fsm
Attached is an example model and user library comprising commands to return an array of objects whose bounding boxes intersect, and a Collision Detection object to drop into your model. The Collision Detection object has a ticker interval label to adjust the frequency of checks, and will switch the colliding objects to selected. It looks for two groups: "Obstacles", containing static objects in the scene (which may overlap without being recorded as collisions), and "Colliders", which are the objects navigating the scene whose bounding boxes should be checked for intersection. In the example model I add the flowitem to the group when it is created, using Group("Colliders").addMember(item). The detector code is on its FlexScript label, 'analyseScene', which is first scheduled to run by the object's reset trigger. collisionDetection3.fsm BBCollisionDetection2.fsl
FlexSim 2022 introduced a Reinforcement Learning tool that enables you to configure your model to be used as an environment for reinforcement learning algorithms. That tool makes connecting to FlexSim from a reinforcement learning algorithm easier, but that tool is not absolutely necessary for this type of connectivity. The same socket communication protocols that are used by that tool are available generally in FlexScript. Attached (ChangeoverTimesRL_V22.0.fsm) is the FlexSim 2022 model that you build as part of the Using Reinforcement Learning documentation that walks you through the process of building and preparing a FlexSim model for reinforcement learning, training an agent within that model environment, evaluating the performance of the trained reinforcement learning model, and using that trained model in a real production environment. Also attached (ChangeoverTimesRL_V6.0.fsm) is a model built with FlexSim 6.0.2 from 2012 that does the exact same thing, but with custom FlexScript user commands instead of the Reinforcement Learning tool. You can use this model with the example python scripts and FlexSim 6.0.2 in the same way that you can use the other model with those same scripts in FlexSim 2022. I'm providing this FlexSim 6 model as an example that demonstrates how you can communicate between FlexSim and other programs. The Reinforcement Learning tool certainly makes this type of communication easier and simpler, with a nice UI for specifying RL-specific parameters, but the fundamental principles of how this works have been available in FlexSim for many years using FlexScript. Hopefully this example can help teach and inspire those who wish to control or communicate with FlexSim from external sources for purposes other than just reinforcement learning. FlexSim is flexible, and the possibilities are endless.
In the attached model we use a Timetable and two MTBF/MTTR objects to define Schedule Loss, Availability Loss (breakdowns) and an element of Performance Loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the send-to percentage, which accounts for quality loss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with a minimum of 5 seconds - so it also contributes to performance loss. When the Type of the item changes, a setup time occurs, which is the final contributor to performance loss. Two state profiles were added to the processor - one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) and regular FlexSim state changes, and determines the availability state that should prevail. A user command getOEEstat is used to access the values, which it calculates on demand and stores in a label on the processor called statsMap. The syntax for this command is: getOEEstat(myMachine,"OEE") The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE". A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats of Processor 1. Processor_OEE_2.fsm 2023-08-22 Update: Added 'TEE' stat.
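As a usage illustration, here is a minimal FlexScript sketch that reads a few of the on-demand stats with the getOEEstat command described above; the object name "Processor1" is just a placeholder for one of the tracked processors.

Object machine = Model.find("Processor1");   // placeholder path to a tracked processor
double oee = getOEEstat(machine, "OEE");
double availability = getOEEstat(machine, "AvailabilityRatio");
double performance = getOEEstat(machine, "PerformanceRatio");
double quality = getOEEstat(machine, "QualityRatio");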
FlexSim 2018 includes functionality for creating a custom table view GUI using a custom data source from within FlexSim. This article will go through the specifics of how to set this up. The Callbacks Custom Table Data Source defines callbacks for how many rows and columns a table should display, what text to display in each of the cells, whether they're read only, etc. If you require further control of your table you can use a DLL or module and subclass the table view data source in C++. An example of how this data source is used can be seen in the Date Time Source activity properties. In the example above, the data being used to display both of these tables is exactly the same. The raw treenode table data can be seen on the right. The table on the left is displaying the hours and minutes for each start and end time rather than the start time and duration in seconds.

In FlexSim 2018, there are a number of tables that currently utilize this new data source. They can be found at:
VIEW:/modules/ProcessFlow/windows/DateTimeArrivals/Table/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/CompositeStatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StateBarOptions/SplitterPane/States/Table
Copying one of these tables can be a good starting point for defining your own table.

The first thing you have to do in order for FlexSim to recognize that your table is using a custom data source is to add a subnode to the style attribute of your table GUI. The node's name must be FS_CUSTOM_TABLE_VIEW_DATA_SOURCE with a string value of Callbacks. Set your viewfocus attribute to be the path to your data. This may just be a variable within your table. It's up to you to define what data will be displayed. If you want to have the default functionality of the Global Table View, you can use the guifocusclass attribute to reference the TableView class. This gives you features like right click menus, support for cells with pointer data (displays a sampler), tracked variables (displays an edit button) and FlexScript nodes (displays a code edit button). If you're going to use this guiclass, be sure to reference the How To node (located directly above the TableView class in the tree) for which eventfunctions and variables you can or should define.

At this point you're ready to add event functions to your table. There are only two required event functions. The others are optional and allow you to override the default functionality of the table view. If you choose not to implement the optional callback functions, the table view will perform its default behavior (whether that's displaying the cell's text value, setting a cell's value, etc.). These event function nodes must be toggled as FlexScript.

The following event functions are required:

---getNumRows---
This function must return the number of rows to display in the table. 0 is a valid return value.
param(1) - View focus node

---getNumCols---
This function must return the number of columns to display in the table. 0 is a valid return value.
param(1) - View focus node

The following event functions are optional:

---shouldDrawRowHeaders---
If this returns 1, then the row headers will be drawn. The header row uses column 0 in the callback functions.
param(1) - View focus node

---shouldDrawColHeaders---
If this returns 1, then the column headers will be drawn. The header column uses row 0 in the callback functions.
param(1) - View focus node

---shouldGetCellNode---
Allows you to decide whether you want to override the table view's default functionality of getting the cell node.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

If shouldGetCellNode returns 1 then the following function will be called:

---getCellNode---
Return the node associated with the row and column of the table.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

---shouldSetCellValue---
Allows you to decide whether you want to override the table view's default functionality of setting a cell's value.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldSetCellValue returns 1 then the following function will be called:

---setCellValue---
Here you can set your data based upon the value entered by the user.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
param(3) - The value to set the cell

---isCustomFormat---
Allows you to decide whether you should define a custom text format for the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If isCustomFormat returns 1 then the following functions will be called:

---getTextColor---
Return an array of RGB components that will define the color of the text. Each component should be a number between 0 and 255: [R, G, B], for example red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---getTextFormat---
Return 0 for left align, 1 for center align and 2 for right align.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---shouldGetCellColor---
Allows you to decide whether you should define a color for the cell's background.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellColor returns 1 then the following function will be called:

---getCellColor---
Return an array of RGB components that will define the cell's background color. Each component should be a number between 0 and 255: [R, G, B], for example red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

---shouldGetCellText---
Allows you to decide whether you want to override the table view's default functionality of getting a cell's text.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellText returns 1 then the following function will be called:

---getCellText---
Return a string that is the text to display in the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The desired number precision
param(5) - 1 if the cell is being edited, 0 otherwise

---shouldGetTooltip---
Allows you to decide whether a tooltip should be displayed when the user selects a cell in the table.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

If shouldGetTooltip returns 1 then the following function will be called:

---getTooltip---
Return a string that is the text to display in the tooltip.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

---isReadOnly---
Return a 1 to make the cell read only. Return a 0 to allow the user to edit the cell.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
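As a starting point, the two required callbacks can be very short. Here is a hedged FlexScript sketch that assumes the view focus node stores its data as a simple tree table (one subnode per row, with each row's subnodes being its cells); each snippet is the body of its own event function node, so adapt it to however your own data is structured.

---getNumRows--- (sketch)
treenode focus = param(1);              // view focus node
return focus.subnodes.length;           // one subnode per row

---getNumCols--- (sketch)
treenode focus = param(1);              // view focus node
if (focus.subnodes.length == 0)
    return 0;
return focus.first.subnodes.length;     // cells of the first row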
Sometimes data exists in Google Sheets that needs to be brought in to FlexSim. There are multiple ways to do this, discussed in this article.

Copy and Paste
This is the easiest method to get data from Google Sheets into FlexSim. Here's how it works:
1) Open the desired sheet in your browser.
2) Click the top-left corner to select everything.
3) Copy the data (use ctrl-C).
4) Open FlexSim.
5) Create a Global Table if you haven't already.
6) Ensure the number of rows and columns in the Global Table is large enough to hold the pasted data.
7) Click on the column header for the first row in the Global Table.
8) Paste the data (use ctrl-V).
Pros: Quick, easy. Cons: Need to resize the global table correctly beforehand, repeat the entire process if the data changes.

Export/Import via CSV
This is also an easy method to get data. Here are the steps:
1) Download your sheet as a csv file.
2) In FlexSim, use the importtable() command to dump the csv into the global table. For example: importtable(Table("GlobalTable1"), "data.csv", 1)
You could add this code to your model's OnReset trigger if desired.
Pros: Quick, table sized to csv data automatically. Cons: Repeat downloading the csv if the data changes.

Export/Import via XLSX
You can also download a Google spreadsheet as an Excel file. Then you can use the Excel importer as normal.
Pros: Quick, table sized to data automatically, many options for configuring. Cons: Repeat downloading the xlsx file if the data changes.

Import via Python
This method is more advanced and requires some configuration for the model and your Google account. Once complete, however, changes can be pulled in automatically without any manual steps. Follow the Sheets quickstart for Python found here: https://developers.google.com/sheets/api/quickstart/python
Following this guide walks you through creating a Google Cloud Project and creating credentials for that project. In addition, consider using this modified python file instead. This file creates a get_values method that the model can call, and that method is also called from main(), so it's easy to test in a python debugger:

import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

# If modifying these scopes, delete the file token.json.
SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]

# The ID and range of a sample spreadsheet.
SAMPLE_SPREADSHEET_ID = "----- add your sheet's ID here -------------"
SAMPLE_RANGE_NAME = "A1:B"


def get_values():
    """Shows basic usage of the Sheets API.
    Prints values from a sample spreadsheet.
    """
    creds = None
    # The file token.json stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists("token.json"):
        creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                "credentials.json", SCOPES
            )
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open("token.json", "w") as token:
            token.write(creds.to_json())

    try:
        service = build("sheets", "v4", credentials=creds)

        # Call the Sheets API
        sheet = service.spreadsheets()
        result = (
            sheet.values()
            .get(
                spreadsheetId=SAMPLE_SPREADSHEET_ID,
                range=SAMPLE_RANGE_NAME,
                valueRenderOption="UNFORMATTED_VALUE",
            )
            .execute()
        )
        values = result.get("values", [])
        return values
    except HttpError as err:
        return []


def main():
    values = get_values()
    if not values:
        print("No data found.")
        return
    for row in values:
        print(row)


if __name__ == "__main__":
    main()

Save the above script next to your model. Create a user command in your model. Format the user command for python and enter the file name and method name. It might look something like this:

/**external python: */
/**/"sheets"/**/
/** \nfunction name:*/
/**/"get_values"/**/

The return type of the command should be var, which means any Variant type. Use code like the following to clone the data to a global table:

Array values = getValues(); // call the user command
Array colHeaders = values.shift();
for (int i = 1; i <= values.length; i++) {
    Array row = values[i];
    row[0] = nullvar;
}
Table(values).cloneTo(Table("GlobalTable1"));

Add the above code to a reset trigger.
Pros: automatic once complete, easy to keep data up-to-date. Cons: requires complicated setup, some python coding. The script could be adjusted to download additional ranges, and then return all data at once, but that requires some code ability.

Import via HTTPS
Google recommends you use a client library to access its APIs. However, it is entirely possible to use HTTPS requests instead. This could all be done from FlexScript, with no additional installations required.
Pros: done all from FlexScript, no extra installs. Cons: very technical.

Conclusion
There are several ways to extract data from Google Sheets into FlexSim. Each has pros and cons. Choose the one that best fits your circumstances. Good luck!
If you have a contiguous conveyor network you can just route items using Conveyor.sendItem() and FlexSim will guide the item to the destination, passing through inline and side transfers as required for the shortest path. If between some conveyors you use exit and entry transfers, perhaps to easily add elevators and shuttles as transports between them, then you'll normally be faced with adding logic to figure out which exit transfer to go to and which port to take from that transfer - and in a large model that logic can be extensive and hard to maintain.

The attached model and library provide commands for automated routing through multiple conveyor sub-sections connected through exit/entry transfers, to conveyor points and to connected fixed resources. This means that you may no longer have to write sendTo code with case statements on each exit transfer to determine which port an item should exit through - nor possibly need to have decision points with case logic to decide the destination for Conveyor.sendItem(). In the example model three sources create items with random destinations which are routed through the conveyor system, transfers and ports automatically to arrive at the correct destinations - some of the ports having transport to perform the move.

To make this work in any model you should load the user library, which will auto-install a set of user commands and a General Process Flow. The first step is to run the user command createAllTravelMaps(), which will calculate all the reachable destinations (decision points, stations, PEs, attached fixed resources and transfers) from all the conveyor points and entry/exit transfers, along with estimates of the convey time (from the conveyor class). This information is consolidated to create the shortest routes and is stored in a label 'travelMap' on each decision point, station, PE and transfer.

To make use of the travelMap data there are three additional user commands supplied that are intended to be used directly by the modeller:
- getNextConveyPoint(thispoint, destination) - returns the next point to send an item to from this point in order to ultimately reach the destination.
- getConveyExitPort(exitTransfer, destination) - returns the port through which an item should exit the exitTransfer in order to reach the destination.
- getConveyItemsNextConveyPoint(item, destination) - returns the next point to which an item should travel to reach the destination from its current position on a conveyor.

The simple process flow in the example and library is set to listen to the Group members of EntryTransfers and ExitTransfers in order to look up the 'destination' label, and either sends the item to the next point or, in the case of the exit transfers, overrides the sendTo port with the value from the map. I've added some documentation to the user commands which you can access easily via the command helper: ConveyorTravelMaps_0.3.fsl ConveyorTravelMapExample.fsm

You may find createAllTravelMaps() takes a while, which is why a progress bar has been added. You may not need all points to be evaluated exhaustively, so the option to pass in a flag indicating to only start evaluation from Entry Transfers is given, which will create somewhat incomplete maps for intermediate points. A future refinement would be to account for transport time from exit transfers, either by recording the times or providing a port list with the expected times. Clearly if you make changes to your transfer positions or conveyor layout you should rerun createAllTravelMaps().
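As an illustration, here is a hedged sketch of how the two main commands might be called from a decision point trigger and an exit transfer's send-to field, assuming each item carries a 'destination' label as in the example model.

// In a decision point's On Arrival trigger: route the item toward its destination.
Object nextPoint = getNextConveyPoint(current, item.destination);
Conveyor.sendItem(item, nextPoint);

// In an exit transfer's Send To Port field: choose the port that leads to the destination.
return getConveyExitPort(current, item.destination);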
Good practice to reduce variance when experimenting is to separate streams of things that might vary in the model, so that the random sampling is independent. An example might be that you have a number of processors which are members of the same breakdown profile (MTBF/MTTR object), where the individual breakdowns are dependent on the state of the processor. If during one scenario a processor is used more than before, then it may sample the duration and next breakdown earlier, and therefore change the sequence of the other machines' sampling of breakdown times, increasing variance. This is because the default setting for the MTBF time fields uses 'getstream(current)' - which means a single stream for the MTBF object, shared across all members.

You could try to change this in the MTBF by using 'getstream(involved)', where 'involved' refers to the breakdown member machine. This causes other problems, since if you're sampling processing times using the machine's stream too, then the amount of items processed will again change the breakdown time samples. You may judge this to be acceptable, but in an ideal world you'd still want separate streams, and may want multiple streams for setup, processing, breakdowns, or subsystem failures.

One way to accomplish this is by changing the way getstream() works such that it can generate a stream for any value you pass to it. That might be an object, as the current getstream() accepts, or it could be the string name of the object or its path. It could also be an array, which then opens a number of possibilities:
- In a breakdown you could replace getstream(current) with getStream([current,involved])  // generates a unique stream number for the MTBF/machine pair*
- In an Object Process Flow you could replace getstream(activity) with getStream([current,activity])  // generates a unique stream for the instance and activity pair, and works for the general process flow too.
- For a processing time on a processor, instead of getstream(current) you could use getStream([current,"Processing"]) and getStream([current,"Setup"]) to generate two separate sampling streams.

The attached library contains an auto-installing user command that overrides getstream() to provide this functionality. The stream values save with the model. getStream-byvariant3.fsl

* This implementation does have some limitations, since during an experiment it does not communicate back to the master model when trying to create new streams. For this reason you'll want to try and have all possible streams set up before running an experiment, or avoid the type of actions that dynamically create the requirement for new streams - so that might be keeping all possible fixed resources and task executers, and hiding/removing them from groups rather than destroying them, as the OnSet options of the parameters table do currently. Alternatively, if you consistently name the dynamically created instances, then the MTBF stream expression could be: getStream([current, involved.name])

Update: I've edited this post and library to use getStream (capital 'S') since the override parameter (var thing) doesn't stay in place and eventually causes FlexScript build errors. So with the updated library you'll need to find/replace 'getstream' with 'getStream' from the model tree.
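To illustrate the idea, a processor could sample its setup and processing times from two independent streams using the overridden command from the library. This is a minimal sketch with made-up distribution parameters:

// Independent sampling streams for setup and processing on the same processor.
double setupTime = uniform(20, 30, getStream([current, "Setup"]));
double processTime = triangular(5, 9, 6, getStream([current, "Processing"]));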
In version 2018 and on, you can make this chart by dragging the Throughput Per Hour by Type template from the dashboard library. If you install the template (available on the Advanced tab), you will see a Process Flow and a Statistics Collector appear in your toolbox.

One of the most common questions from FlexSim users is as follows: How do I make a chart that shows the output every hour? You can make this chart in three steps.

Configure the Statistics Collector
First, you need a Statistics Collector. Make a new one in the toolbox (click the green plus button, select Statistics, and then select Statistics Collector). On the Event Listening tab, use the green plus button to add a timer event, and configure it as shown here:
This timer event will fire every hour (every 3600 seconds) in the model. Notice the shared label, which stores all members of the Processors group as an array. We will use this label in the next step. Once you have configured the timer, you need to set up the row mode for this collector. We want one row per processor, and we need to use the Processors label as the row value. Since the Processors label is an array, we will get three rows per timer event, each row corresponding to a processor. Finally, we can add the columns. The three columns are as follows:
- Time - use the pick list to select Model Date/Time from the Time menu
- Object - use the pick list to select ID of row value from the IDs menu
- Output - use the pick list to select Statistic by Object from the Object Statistics menu, and use data.rowValue as the object value in the popup
If you use the pick options to choose these options, then the storage type and display format options should be set automatically. With these three columns in place, we can watch the table populate. Reset and run the model at high speed. Every model hour, you should see a new set of rows appear, one for each processor in the group. The table will look something like this:

Configure the Calculated Table
The Statistics Collector table from the previous steps is close to what we want, except that the output value always increases as the model runs. But what about the output for just a single hour? To get that value, we can use a Calculated Table. Make a new calculated table, and give it the following query (in the Query field):

SELECT Time, Object, ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), 0) AS OutputPerHour FROM StatisticsCollector1

This query uses SQL window functions. Basically, it says that each row's value should subtract the previous row's value for the same object. In addition, if that value is NULL (because it's the first row for that object), then just use 0. For example, if a processor's cumulative Output is 14 at the first timer event and 31 at the second, the second row's OutputPerHour is 31 - 14 = 17. If you reset and run the model, so that the collector table has at least a few rows in it, click the Update button to run the query. Notice that the Time and Object columns show numbers. This is because the Calculated Table can't infer the formatting of the column. To set the formatting, use the Display Format tab. You may also wish the table to update every hour, with the Statistics Collector.

Make the Chart
Now that our data is correct, we can make a chart. Make a new dashboard, and create a Time Plot chart. Point the chart to the calculated table. Let's use the Time column for the X values, and the OutputPerHour column for the Y values. In addition, make sure to split by the Object column. If the calculated table updates every hour, then running the model should create the chart shown at the beginning of this article.
Here is the model used to create this chart (should work in 2017 Update 2 Beta or later; beta must be built on or after August 21, 2017). outputperhourdemo.fsm
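As a side note, the same windowed query can also be run on demand from FlexScript with Table.query(). This is a hedged sketch that assumes Table.query() accepts the same windowed SQL as the Calculated Table, and that a Global Table named "OutputPerHour" exists to receive the result.

string sql = "SELECT Time, Object, "
    + "ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), 0) AS OutputPerHour "
    + "FROM StatisticsCollector1";
Table.query(sql).cloneTo(Table("OutputPerHour")); // copy the query result into a Global Table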
One of the most powerful features of Process Flow is the ability to easily define a Task Sequence. However, many real-life situations require the coordination of multiple workers and machines to do a single task. This article demonstrates one approach you can use with Process Flow to coordinate multiple Task Executers, or in other words, to create a coordinated task sequence. This article talks about an example model (handoff.fsm). It might be easiest to open that model, watch it run, and perhaps read this article with the model open.

The Example Scenario
Here is a screenshot of the demo model used in this article:
Items enter the system on the left. The yellow operator must carry each item to the queue in the middle, and then wait for the purple operator to arrive. Once the two operators are both at the middle queue, the yellow operator can unload the box, and the purple operator can take it. After this point, the yellow operator is free to load another item from the left queue. The purple operator takes the item, waits for a while, and then puts the item in the sink on the right. The interesting part of this model is the handoff. The yellow operator must wait for the purple operator, and vice versa. This is the synchronization point, and it requires coordination of both operators. The approach used in this model allows you to add more operators to the yellow side, and more to the purple side. But it still maintains that a yellow operator must wait for a purple operator before unloading the box.

The Example Model
In addition to the 3D layout shown previously, there are 5 process flows in the example model. The first is a general flow, and defines the logic for each task. The second is a Task Executer flow, and defines the logic for the yellow operator. The third is also a Task Executer flow, and defines the logic for the purple operator. The remaining two flows are synchronization flows, for synchronizing between the other three flows.

Synchronizing on a Task
The basic approach in this model uses the Synchronize activity. This activity waits for one token from each incoming connector before it allows any of the tokens to move on. Here is the Yellow Purple Sync flow (a global Sub Flow) from the example model:
The flows for the yellow and purple operators each use the Run Sub Flow activity to send a token to this flow, to their respective start activities (you can use the sampler on the Run Sub Flow activity to sample a specific start activity in a sub flow). This is what allows both the yellow and purple operators to wait for each other. However, it is important that the yellow and purple operators are both doing the same task. In this model, there is a token that represents each item that needs to be moved. Both operators get a reference to this task token. The Synchronize activity is set up to partition by that task token. That means that a yellow and a purple operator must both call this sub flow with the same task token, ensuring that each task has its own synchronization. In the example model, this kind of synchronization happens between operators, and it happens between each task and an operator. Basically, the task must wait for the operator to finish that operator's part.

The Task Flow
A task token is created every time an item enters the first queue. The tasks flow puts that task on both the Yellow and Purple lists. In both cases, the task token does not wait to be pulled, but keeps itself on the list.
Then, the task token waits for a yellow operator to finish with it, and then for the purple operator to finish with it. There is a zone in this flow, but its only purpose is to gather statistics for how long the whole task took.

The Yellow and Purple Flows
These flows are easiest to understand when viewed side by side:
Recall that each task is put on both the Yellow and Purple lists at the exact same model time. The yellow operator waits to get a task (at the Get Task activity). Then the operator travels to the first queue, gets the item, and travels to the second queue. At this point, the yellow operator waits. At the same time, the purple operator is also waiting for the task. The purple operator just has to travel to the second queue before waiting for the yellow. Once the yellow operator arrives, the purple operator also has to wait for the yellow operator to unload the box. On the yellow side, once the purple operator arrives, the yellow operator unloads the box, and then synchronizes with the purple operator, allowing the purple operator to load the box.

Summary
The purpose of this article is to show one method for synchronizing tokens in separate flows. That method is as follows:
- Have a token for each task.
- As each task executer (or fixed resource) needs to synchronize, they each use a Run Sub Flow activity, putting the token in a specific Start activity.
- The Sub Flow (a global Sub Flow) has a Synchronize activity that requires a token from each participant for that task before releasing the tokens.
This is certainly not the only way to create this model. However, there are some advantages:
- By forcing the task to synchronize, you can gather stats on how long each phase of the task took, as well as how long the complete task took.
- You can add more yellow or purple operators by copy/paste. They simply follow their own logic.
- Each set of logic is separated; tasks, yellow operators, and purple operators each have their own flows, making each one much simpler.
The exact approach used in the example model will not work exactly as it is for every model. However, you can apply the general principles, and adapt them to your own situation.
Attached is an example model that shows how you can use reversible conveyors for routing/sorting of items. ReversibleRoutingConveyor.fsm

Traditionally we've sort of warned against using reversible conveyors for purposes other than accumulation buffers. The main reason I've been hesitant to promote alternative uses is that the routing system for conveyors is, and will continue to be, static. In other words, the path finding algorithm to send an item through a network of conveyors to a destination point does not change when one or more conveyors in the system is reversed. Put another way, "for routing purposes, ..., the conveyor is always assumed to be conveying in its original direction." This naturally makes using reversible conveyors for routing more complex. However, as long as you can still work within those constraints, you can actually get the desired outcome. The attached model does this by 'shortening' the routing decision so that it can always route onto conveyors in their forward direction.

The attached model sorts items by color by moving them between two conveyors via a reversible conveyor that conveys in either direction as needed. In order to still work within the 'static routing' rule, I split the reversible conveyor into two separate conveyors that are directed into each other. This way, I can route items onto the reversible section by referencing a conveyor whose primary forward direction always diverts from the line a given item is on. The critical element is that I have to always make sure that when one conveyor is moving forward, the other is reversed, and vice versa. I also have to implement some mutual exclusion, blocking some items so they aren't sent to conveyors in opposite directions. This is all done in the process flow.

I honestly don't know how close this example is to a real life situation. We've just received some requests for a reversible conveyor that can do more than just accumulation buffers, and routing/sorting is the main alternate example I can think of. This is one way you can achieve such a result.
This article explores an example model. In this model, items on downstream lanes are able to reserve dogs so that items on upstream lanes cannot use them: reservedogdemo.fsm

About Dogs in FlexSim
FlexSim simulates dogs on a power-and-free system in an extremely abstract and minimal way. A dog isn't a persistent entity at all. Instead, FlexSim calculates where dogs would be, given the speed, and when they would interact with items. This has a huge performance benefit. But if your logic needs items to interact with specific dogs, this can pose a problem: how do you interact with such an abstract entity?

The Catch Condition
The only time you can "see" a dog in FlexSim is during the Conveyor's Catch Condition: https://docs.flexsim.com/en/23.1/Reference/PropertiesPanels/ConveyorPanels/ConveyorBehavior/ConveyorBehavior.html#powerAndFree
The catch condition fires when a dog passes by an item. If the catch condition returns a 1, the item catches the dog and transfers to the power and free conveyor. If the catch condition returns a 0, the item does not catch the dog. During the catch condition (and only during a catch condition), you can learn many things about a dog:
- ID - each dog has an ID. The ID is derived from the length of the conveyor and by the distance the conveyor has travelled. If a conveyor is 26 dogs long, the dogs will have IDs 1 through 26.
- Location - since an item is trying to catch the given dog, you can derive the dog's location from the item's location.
- Speed - the conveyor that owns the dog is "current" in the catch condition, so you can get the speed of the conveyor at that point.
We'll use all these pieces of information in a moment.

Creating Tokens to Represent Dogs
The first real insight into this model is to make a dummy item. The purpose of this dummy item is to cause the Catch Condition to fire. It never gets on the conveyor. But when the catch condition fires, it makes a token that represents the dog. In this example, that item has a label called "DogFinder". Here is the relevant code from the catch condition:

if (item.DogFinder?) {
    Object pe = current.DogPE;
    if (pe.stats.state(1).value == PE_STATE_BLOCKED) {
        return 0;
    }
    double dist = current.MaxDogDist;
    double speed = current.targetSpeed;
    double duration = dist / speed;
    if (!item.labels["DistAlong"]) {
        item.DistAlong = Vec3(item.getLocation(1, 0, 0).x, 0, 0).project(item.up, current).x;
    }
    Token token = Token.create(0, current.DogHandler);
    token.DistAlong = item.DistAlong;
    token.Conveyor = current.as(treenode);
    token.Duration = duration;
    token.DogNum = dogNum;
    token.Speed = speed;
    token.DetectTime = Model.time;
    token.release(1);
    return 0;
}

There's a lot going on in this code:
- This logic only fires for the fake dog finder item.
- If the photo eye just upstream from the dog is blocked, that means there is an item, and this dog is not available. Return here if that's the case.
- Figure out how long this dog will last (the duration), assuming the conveyor runs at the same speed. In this model, there's a label on the conveyor called MaxDogDist. This is the distance from the PE to the end of the conveyor, minus 2 meters.
- If this is the first dog ever, calculate the position of the dog, given the position of the item. Store that on a label.
- Create a token with all kinds of labels. We'll need all this information to estimate where the dog is later, and to estimate how far it is from other items.

Pushing Dog Tokens to a List
Once the token is made, we need to push it to a list, so that items can pull them.
If all your items are the same size, you can just push the token to a list directly. In this model, however, there are larger items that require two dogs. So there's a Batch activity first. The dummy item is far enough back that it can detect two dogs and still push the first dog to the list in time for the first lane. So it holds the dog back in a Batch activity until one of two things happens:
- Either the next dog token appears, completing the batch.
- Or the max wait timer on the batch expires, indicating that the next dog is not available (otherwise, there would have been a token). This duration is based on the conveyor's speed and dog interval.
If the batch is complete, the first dog in the batch can be marked as a "double", meaning the dog behind it is also available. Once the flow has determined whether the dog is a single or double, it then pushes it to the list.

Creating the DistToDog Field
When pulling the dog from the list, an item needs to know the position of the dog relative to the item. Is it 0.3 meters upstream? Or is it 2 meters downstream? When we query the set of dogs, we need to filter out downstream dogs and order by upstream dogs, to reserve the closest one:

WHERE DistToDog >= 0 ORDER BY DistToDog

Here, DistToDog is positive if the dog is upstream, and negative if the dog is downstream. The code for this field is as follows:

/**Custom Code*/
Variant value = param(1);
Variant puller = param(2);
treenode entry = param(3);
double pushTime = param(4);

double distAlong = Vec3(puller.getLocation(1, 0, 0).x, 0, 0).project(puller.up, value.Conveyor).x;
double dt = Model.time - value.DetectTime;
double dx = value.Speed * dt;
double dogDistAlong = value.DistAlong + dx;
return distAlong - dogDistAlong;

This code assumes that the item waiting to merge is the puller. So we calculate the item's "dist along" the main conveyor. Then we estimate the location of the dog since the DogFinder item created the token. Then we can find the difference between the item's position and the dog's position.

Pulling Dogs from the List
Each incoming lane has a Decision Point. The main process flow creates a token when an item arrives there. At a high level, this token just needs to do something simple: pull an available downstream token. If all the items are the same size, it's that simple. But this example is more complicated! If the item is large, we also need to pull the upstream dog behind the dog we got, so that no other item can get that dog. And it gets even more complicated! It can happen that an item acquires the dog after a double dog. In that case, we need to mark the downstream dog as "not double", so that big items won't try to get it. So most of the logic in the ConveyorLogic flow is handling that case.

Using the Dog
Finally, the item must be assigned to that dog. The ConveyorLogic flow sets the DogNum label on the item. Then, the catch condition checks to see if the dog matches the item's DogNum.

Upstream Items
The final piece of this model is allowing upstream items to catch a dog on this conveyor. This model adds a special label to those items called "ForceCatch". The catch condition always returns true for those items.
If you've ever tried to nest groups of objects inside a hierarchy of planes, you may find the drawing of the planes suboptimal and lacking information:
Using the container (modified plane) in the attached user library you can represent the container with just an outline. A settings dashboard is installed with the library, along with some user commands and global variables. Corner prisms show the nesting layers under the prism:
The option 'Use the container center' allows you to use it either as a plane, as before, or, when unselected, as a bordered frame where dropping an object or clicking and dragging within the borders will behave as though you are dropping onto or clicking/dragging the model floor. You can also choose to hide the containers entirely for the cleanest visuals. I hope this will encourage users to use containers more, since when coupled with Templates and Object Process Flows they can increase the scalability and make your developed assets more manageable. (In those cases the container becomes the member instance of the process flow or template master, and references to its components are made through pointer labels on the container rather than names, which you may want to alter for reporting purposes. The pointer labels are updated automatically when creating a new instance of the container.) ContainerMarkers_v1.fsl
If you want planes you already have in your model to adopt this style, just add this to their draw code:

return containerdraw(view,current);
This article describes an example of Reinforcement Learning being used to solve scheduling problems. See the model and Python files in the attached zip file. SchedulingRL.zip

Problem Description
This model represents a generic sheet metal processing plant. There are four machines in series. Each job requires time on all four machines. Jobs come in batches of 10. A poor sequence of jobs will cause blocking between items, lowering throughput. If the time between batches is long, such as a shift or a day, you could use the optimizer to determine the best sequence. If the time between batches is short, however, the optimizer may not be feasible. For real sequencing problems, the time to find a good sequence can be anywhere from 5 minutes to an hour, or even longer. This makes it impractical for high-velocity situations. The attached model requests a decision every time the first machine in the series is available. The only action is an index for the Nth available job. So the decision can be interpreted as "which job should I do next?"

Solution
The general solution is to use reinforcement learning. However, this problem required customized Python scripts:
- The model uses custom parameters for observations. This allows arbitrary values for observations.
- The model uses a custom observation space. The observations include a table of the required times at each station for the remaining jobs. They also include an array of the in-progress jobs and their predicted remaining times. By using a Dict space, the Python scripts can combine all the observations into a single space.
- The model uses an Action Mask. An Action Mask is a binary array with one value per value of the action. This tells the RL algorithm about invalid options.
- The Python scripts require the sb3-contrib package. Use pip install sb3-contrib to install it.

Results
After training for 500k time-steps, the agent learns to choose jobs moderately well. If you run the inference script, you can use the experimenter to compare a random policy to a trained agent.
What is ODBC?
FlexSim can use ODBC, and ODBC can connect to many different kinds of data sources, including files (like Excel) and databases. It allows you to use SQL queries to get data from any supported data source. ODBC uses a driver to determine how to get data from a given data source. A driver is a translator between ODBC and whatever data source you are querying. For example, there is a driver for Excel files. If you have that driver on your system, you could use ODBC to query data from Excel files. As another example, there is a driver for SAP HANA, the database for SAP. If you have that driver, then ODBC will know how to talk to SAP. This means that if the correct driver is installed, and is working correctly, you can use FlexSim to query any ODBC data source.

Using the Database Connector
FlexSim has a tool called the Database Connector. You can use it to configure a connection to a database. This article uses that tool. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/DatabaseConnectors/
As an example, let's say that you want to connect to Excel using ODBC. Assuming you have the Excel ODBC driver installed on your system, you can configure a Database Connector to look like this:
Note that you must specify the full connection string. In the connection string, you can see that the driver and the file are specified. Then you can query the data in a given sheet with a query like this:

SELECT * FROM [Sheet1$]

You can find more information on querying Excel files at websites like this: https://querysurge.zendesk.com/hc/en-us/articles/205766136-Writing-SQL-Queries-against-Excel-files-Excel-SQL-
If you use Office 365, you may need to install the Microsoft Access Database Engine 2016 Redistributable. This includes newer drivers for Excel and Access. Be sure to install it with the /quiet flag on the command line. Instructions can be found in this troubleshooting guide: https://docs.microsoft.com/en-us/office/troubleshoot/access/cannot-use-odbc-or-oledb
Note that FlexSim has an Excel tool, which is usually easier to use. This tool requires Excel to be installed, but does not require the ODBC driver for Excel to be installed on your computer. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/ExcelInterface/. Excel makes a good example because most people have it, and it's easy to get the driver for it.

Connection Strings
Different kinds of connections require different connection strings. The following list has an example connection string for a few data sources:
- Excel: Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=Path\To\Excel\File.xlsx
- Access: Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=Path\To\Access.accdb
- SQLite: Driver={SQLite3 ODBC Driver};Database=Path\to\sqlite.db
- SAP HANA: DRIVER={HDBODBC};UID=myUser;PWD=myPassword;SERVERNODE=myServer:30015

Checking for Drivers
Note that each connection string specifies a driver, and then additional information. The additional information depends on the driver you are using. In order to determine which drivers are on your system, you need to open the ODBC Data Sources Administrator window. To do that, hit the Windows key, and then type ODBC. Then choose the option called ODBC Data Sources (64 bit). If you are running 32-bit FlexSim, open the 32-bit version. Go to the Drivers tab. Here is what my Drivers tab looks like:
You can see I have drivers for Access, Excel, SQL Server, and SQLite3. I don't have drivers for SAP HANA.
If I did, you'd see a driver named HDBODBC in the list. To access that kind of database, I'd need to install that driver. You can also see that the name of the driver used in the connection string must exactly match what is shown here.

Other Info
You may see an exception appear when you test the connection to your database. If the view shows that the connection succeeded, then it has succeeded. The exception happens because FlexSim tries to get a list of tables from the database that it's querying, and FlexSim may not guess correctly for your particular data source. That exception can be safely ignored.

If you used the old db() commands in the past, consider upgrading to the Database Connector. It will be orders of magnitude faster to read in an entire table.
Tokens and Flow Items can be very difficult to add to a chart because they don't exist on Reset, making them difficult to select. This article shows how you can use a Process Flow to allow a Statistics Collector to record a token's changing label value, and also to chart that value over time:

The model for this example is attached (graphlabeldata.fsm). It is a very simple model: The Scheduled Source creates three tokens, each of which creates a label called data. This label is created by choosing "Add Tracked Variable" for the value, which opens this dialog box: The reason we want the label to be a Tracked Variable is that Tracked Variables emit an OnChange event, and we want to listen to that event. If you use the time interval collection method, discussed later in this article, you don't need to make the label a Tracked Variable.

Each token then goes through a loop, where it waits and then updates the value. This is meant to represent a much more complicated model, where the token travels through many activities, any of which could change its label value. For this example, the model randomly changes the value on the label.

Now that we have a token and a label whose value is changing as the model runs, we can work on making a chart. We want to eventually make a Statistics Collector, but Statistics Collectors can only listen to events of objects that exist after the model has been reset. Tokens and FlowItems (along with their labels) are destroyed on reset, so we can't listen directly to them. However, some Process Flow activities can listen to events on tokens and flowitems, and the Statistics Collector can listen to those events. For that reason, we make a second Process Flow:

This flow has an Event-Triggered Source, which listens for tokens to leave the "Init Tracked Variable" activity in the first flow. When that happens, the source creates a token, and that token immediately gets a reference to the label node (note that this is different from the value of the label node). Next, the new token goes to a Start activity. The Start activity is called "Log Change." This activity is just a placeholder. While you could technically live without this activity, it makes things a little clearer, as we will discuss later. Other than providing OnEntry and OnExit events, the Start activity has no internal logic whatsoever. After passing through the Log Change activity, the new token waits for the label value to change:

In order to listen to this event, you can first sample a Tracked Variable in the Toolbox. This provides the OnChange event. Then you can update the Object field to the code shown above. Notice that every time this event happens, the token simply passes through the Log Change activity and then resumes listening. When the original label value changes, it emits an OnChange event. When that event fires, the token listening to that event travels through the Log Change activity, which emits OnEntry and OnExit events. We can use these events in the Statistics Collector. The key to this technique is that we used Process Flow, which is good at listening to token and flowitem events, to generate Activity events, which can be used in the Statistics Collector.

In the attached model, the first Statistics Collector is configured like this: It simply listens to the On Entry of the Log Change activity.
The columns are defined as follows: The first two columns are simpler; the Time column uses the Model Date/Time option: The second column gets the ID of the token as an integer: The third column gets the current value of the Data label:

Now that the Statistics Collector is set up, we can configure the chart to use this collector and split by the Token ID.

The process to record the label value every interval (rather than on every change) is very similar. The downside is that the data is less granular, but the upside is that a label doesn't have to be a Tracked Variable to be charted. The example model simply uses a Split activity to copy the data from the Event-Triggered Source and sends it to a similar listening loop: Instead of waiting for the value to change, the second token waits for a fixed time interval. A similar Statistics Collector will allow you to create the following chart:

This approach works for every token created by the scheduled source. No matter how many tokens you create, each will show up on the chart:
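For readers who prefer to see the listening pattern written out, here is a conceptual sketch in plain Python (not FlexScript, and not part of the attached model): a value that fires an on-change callback, and a logger that appends a (time, token ID, value) row on each change, which is essentially what the Log Change activity and the Statistics Collector do together. All names here are made up for illustration.

import random

class TrackedValue:
    """Holds a value and notifies listeners whenever it changes (OnChange)."""
    def __init__(self, value=0.0):
        self._value = value
        self._listeners = []

    def on_change(self, callback):
        self._listeners.append(callback)

    def set(self, value):
        old, self._value = self._value, value
        for callback in self._listeners:
            callback(old, value)

log = []  # rows of (time, token_id, data), like the collector's table

def make_logger(token_id, clock):
    # The "Log Change" step: record one row each time the value changes.
    def log_change(old, new):
        log.append((clock(), token_id, new))
    return log_change

sim_time = 0.0
data = TrackedValue()
data.on_change(make_logger(token_id=1, clock=lambda: sim_time))

# Stand-in for the token's loop that waits and then updates the label.
for _ in range(3):
    sim_time += 10.0
    data.set(random.uniform(0, 100))

print(log)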