FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
If you have a contiguous conveyor network, you can route items using Conveyor.sendItem() and FlexSim will guide each item to its destination, passing through inline and side transfers as required for the shortest path. If some conveyors are instead connected through exit and entry transfers - perhaps to easily add elevators and shuttles as transports between them - then you're normally faced with adding logic to figure out which exit transfer to go to and which port to take from that transfer, and in a large model that logic can be extensive and hard to maintain.

The attached model and library provide commands for automated routing through multiple conveyor sub-sections connected through exit/entry transfers, to conveyor points and to connected fixed resources. This means that you may no longer have to write sendTo code with case statements on each exit transfer to determine which port an item should exit through, nor need decision points with case logic to decide the destination for Conveyor.sendItem(). In the example model, three sources create items with random destinations, which are routed through the conveyor system, transfers, and ports automatically to arrive at the correct destinations - some of the ports having transport to perform the move.

To make this work in any model, load the user library, which will auto-install a set of user commands and a General Process Flow. The first step is to run the user command createAllTravelMaps(), which calculates all the reachable destinations (decision points, stations, photo eyes, attached fixed resources, and transfers) from all the conveyor points and entry/exit transfers, along with estimates of the convey time (from the conveyor class). This information is consolidated to create the shortest routes and is stored in a label 'travelMap' on each decision point, station, photo eye, and transfer.

To make use of the travelMap data, three additional user commands are supplied that are intended to be used directly by the modeller:

- getNextConveyPoint(thispoint, destination) - returns the next point to send an item to from this point in order to ultimately reach the destination.
- getConveyExitPort(exitTransfer, destination) - returns the port through which an item should exit the exitTransfer in order to reach the destination.
- getConveyItemsNextConveyPoint(item, destination) - returns the next point to which an item should travel to reach the destination from its current position on a conveyor.

The simple process flow in the example and library listens to the Group members of EntryTransfers and ExitTransfers, looks up each item's 'destination' label, and either sends the item to the next point or, in the case of the exit transfers, overrides the sendTo port with the value from the map. I've added some documentation to the user commands, which you can access easily via the command helper.

ConveyorTravelMaps_0.3.fsl ConveyorTravelMapExample.fsm

You may find createAllTravelMaps() takes a while, which is why a progress bar has been added. You may not need all points to be evaluated exhaustively, so there is an option to pass in a flag indicating to start evaluation only from entry transfers, which will create somewhat incomplete maps for intermediate points. A future refinement would be to account for transport time from exit transfers, either by recording the times or by providing a port list with the expected times. Clearly, if you change your transfer positions or conveyor layout, you should rerun createAllTravelMaps().
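To give a feel for how these commands slot in, here is a minimal sketch of Send To Port logic on an exit transfer. This is an illustration only, not code from the library: it assumes the library's commands are installed, createAllTravelMaps() has been run, and (as in the example model) each item carries a 'destination' label referencing its destination object. The supplied process flow already does the equivalent for Group members.

    // Hypothetical "Send To Port" logic on an exit transfer
    // (item and current are the usual variables available in the field)
    Object destination = item.destination; // label holding the destination object
    return getConveyExitPort(current, destination); // port leading toward that destination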
In this video you will learn how to create different layouts in a FlexSim simulation model using the Model Layout tool. For more video tutorials, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
In this video you will learn how to represent the use of a forklift with an operator. This video is based on this question from the Forum. For more video tutorials, you can subscribe to the FlexSim Andina YouTube channel and access our Retos FlexSim (FlexSim Challenges) playlist.
This article describes an example of Reinforcement Learning being used to solve scheduling problems. See the model and python files in the attached zip file: SchedulingRL.zip

Problem Description

This model represents a generic sheet metal processing plant. There are four machines in series. Each job requires time on all four machines. Jobs come in batches of 10. A poor sequence of jobs will cause blocking between items, lowering throughput. If the time between batches is long, such as a shift or a day, you could use the optimizer to determine the best sequence. If the time between batches is short, however, the optimizer may not be feasible. For real sequencing problems, the time to find a good sequence can be anywhere from 5 minutes to an hour, or even longer. This makes it impractical for high-velocity situations.

The attached model requests a decision every time the first machine in the series is available. The only action is an index for the Nth available job. So the decision can be interpreted as "which job should I do next?"

Solution

The general solution is to use reinforcement learning. However, this problem required customized python scripts:

- The model uses custom parameters for observations. This allows arbitrary values for observations.
- The model uses a custom observation space. The observations include a table of the required times at each station for the remaining jobs. They also include an array of the in-progress jobs and their predicted remaining times. By using a Dict space, the python scripts can combine all the observations into a single space.
- The model uses an Action Mask. An Action Mask is a binary array with one value per value of the action. This tells the RL algorithm about invalid options. The python scripts require the sb3-contrib package. Use pip install sb3-contrib to install it.

Results

After training for 500k time-steps, the agent learns to choose jobs moderately well. If you run the inference script, you can use the experimenter to compare a random policy to a trained agent.
In the attached model we use a Time Table and two MTBF/MTTR objects to define schedule loss, availability loss (breakdowns), and an element of performance loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on a send-to percentage, which accounts for quality loss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with a minimum of 5 seconds, so it also contributes to performance loss. When the Type of the item changes, a setup time occurs, which is the final contributor to performance loss.

Two state profiles were added to the processor: one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) and regular FlexSim state changes, and determines the availability state that should prevail.

A user command getOEEstat is used to access the values, which it calculates on demand and stores in a label on the processor called statsMap. The syntax for this command is:

getOEEstat(myMachine, "OEE")

The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE".

A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats for processor 1.

Processor_OEE_2.fsm

2023-08-22 Update: Added 'TEE' stat.
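As a hedged sketch of calling the command from a script or trigger: only getOEEstat itself comes from the library; the processor name and the Global Table are hypothetical and would need to exist in your model.

    Object machine = Model.find("Processor1"); // an OEE-tracked processor
    Array stats = ["AvailabilityRatio", "PerformanceRatio", "QualityRatio", "OEE"];
    Table results = Table("OEEResults"); // assumed Global Table with 4 rows, 2 columns
    for (int i = 1; i <= stats.length; i++) {
        results[i][1] = stats[i]; // stat name
        results[i][2] = getOEEstat(machine, stats[i]); // value, calculated on demand
    }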
A narrated video demonstration of the FlexSim Healthcare Tutorial described in the FlexSim 2020 User Manual has been released! Here is a link to the written documentation: https://docs.flexsim.com/en/20.0/Tutorials/FlexSimHC/OverviewFlexSimHC/ Here's a link to the video: https://vimeo.com/394012280
Attached is an example model that shows how you can use reversible conveyors for routing/sorting of items. ReversibleRoutingConveyor.fsm

Traditionally we've sort of warned against using reversible conveyors for purposes other than accumulation buffers. The main reason I've been hesitant to promote alternative uses is that the routing system for conveyors is, and will continue to be, static. In other words, the path finding algorithm that sends an item through a network of conveyors to a destination point does not change when one or more conveyors in the system is reversed. Put another way, "for routing purposes, ..., the conveyor is always assumed to be conveying in its original direction." This naturally makes using reversible conveyors for routing more complex. However, as long as you can still work within those constraints, you can actually get the desired outcome. The attached model does this by 'shortening' the routing decision so that it can always route onto conveyors in their forward direction.

The attached model sorts items by color by moving them between two conveyors via a reversible conveyor that conveys in either direction as needed. In order to still work within the 'static routing' rule, I split the reversible conveyor into two separate conveyors that are directed into each other. This way, I can route items onto the reversible section by referencing a conveyor whose primary forward direction always diverts from the line a given item is on. The critical element is that I have to always make sure that when one conveyor is moving forward, the other is reversed, and vice versa. I also have to implement some mutual exclusion, blocking some items so they aren't sent to conveyors in opposite directions. This is all done in the process flow.

I honestly don't know how close this example is to a real life situation. We've just received some requests for a reversible conveyor that can do more than just accumulation buffers, and routing/sorting is the main alternate example I can think of. This is one way you can achieve such a result.
The attached model contains functionality to depict the item flow as a 3D map using a FlowMapper3D object (a cylinder) and an associated Object Process Flow. Additionally, a 'kpi' label on the object gives an indication of layout performance, which you can link to and observe as you interact with or experiment on the layout.

To set this up in your model:

- Add a Group of objects whose entry events will be used by the mapper, calling that Group "FlowMapperObjects".
- Add a ColorPalette called "HeatPalette".
- Copy the FlowMapper3D object and the FlowMapperProcess to your model.

Note that there is a boolean label 'showPercents' on the FlowMapper3D object that tells it whether to show percentage text or the number of flowitems for each location pair.

3DFlowMapper.fsm
FlexSim 2022 introduced a Reinforcement Learning tool that enables you to configure your model to be used as an environment for reinforcement learning algorithms. That tool makes connecting to FlexSim from a reinforcement learning algorithm easier, but it is not absolutely necessary for this type of connectivity. The same socket communication protocols that are used by that tool are available generally in FlexScript.

Attached (ChangeoverTimesRL_V22.0.fsm) is the FlexSim 2022 model that you build as part of the Using Reinforcement Learning documentation, which walks you through the process of building and preparing a FlexSim model for reinforcement learning, training an agent within that model environment, evaluating the performance of the trained reinforcement learning model, and using that trained model in a real production environment. Also attached (ChangeoverTimesRL_V6.0.fsm) is a model built with FlexSim 6.0.2 from 2012 that does the exact same thing, but with custom FlexScript user commands instead of the Reinforcement Learning tool. You can use this model with the example python scripts and FlexSim 6.0.2 in the same way that you can use the other model with those same scripts in FlexSim 2022.

I'm providing this FlexSim 6 model as an example that demonstrates how you can communicate between FlexSim and other programs. The Reinforcement Learning tool certainly makes this type of communication easier and simpler, with a nice UI for specifying RL-specific parameters, but the fundamental principles of how this works have been available in FlexSim for many years using FlexScript. Hopefully this example can help teach and inspire those who wish to control or communicate with FlexSim from external sources for purposes other than just reinforcement learning. FlexSim is flexible, and the possibilities are endless.
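The user commands themselves aren't reproduced here, but the shape of the FlexScript side would be something like the following sketch. The command names come from FlexSim's long-standing socket command set; treat the exact signatures, the address, and the port as assumptions and verify them in the command reference before use:

    // Connect to an external program (e.g. a python training script) on port 5005
    socketinit();                              // initialize socket support
    int client = clientcreate();               // create a client socket
    clientconnect(client, "127.0.0.1", 5005);  // connect to the listening process
    clientsend(client, "my observation");      // send the model's state
    string action = clientreceive(client);     // wait for the reply (the action)
    clientclose(client);                       // clean up
    socketend();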
What is ODBC?

FlexSim can use ODBC, and ODBC can connect to many different kinds of data sources, including files (like Excel) and databases. It allows you to use SQL queries to get data from any supported data source.

ODBC uses a driver to determine how to get data from a given data source. A driver is a translator between ODBC and whatever data source you are querying. For example, there is a driver for Excel files. If you have that driver on your system, you can use ODBC to query data from Excel files. As another example, there is a driver for SAP HANA, the database for SAP. If you have that driver, then ODBC will know how to talk to SAP. This means that if the correct driver is installed and working correctly, you can use FlexSim to query any ODBC data source.

Using the Database Connector

FlexSim has a tool called the Database Connector. You can use it to configure a connection to a database. This article uses that tool. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/DatabaseConnectors/

As an example, let's say that you want to connect to Excel using ODBC. Assuming you have the Excel ODBC driver installed on your system, you can configure a Database Connector whose connection string specifies the driver and the file. Note that you must specify the full connection string. Then you can query the data in a given sheet with a query like this:

SELECT * FROM [Sheet1$]

You can find more information on querying Excel files at websites like this: https://querysurge.zendesk.com/hc/en-us/articles/205766136-Writing-SQL-Queries-against-Excel-files-Excel-SQL-

If you use Office 365, you may need to install the Microsoft Access Database Engine 2016 Redistributable. This includes newer drivers for Excel and Access. Be sure to install it with the /quiet flag on the command line. Instructions can be found in this troubleshooting guide: https://docs.microsoft.com/en-us/office/troubleshoot/access/cannot-use-odbc-or-oledb

Note that FlexSim has an Excel tool, which is usually easier to use. That tool requires Excel to be installed, but does not require the ODBC driver for Excel to be installed on your computer. For more information, see https://docs.flexsim.com/en/20.2/Reference/Tools/ExcelInterface/. Excel makes a good example because most people have it, and it's easy to get the driver for it.

Connection Strings

Different kinds of connections require different connection strings. The following list has an example connection string for a few data sources:

Excel
Driver={Microsoft Excel Driver (*.xls, *.xlsx, *.xlsm, *.xlsb)};DBQ=Path\To\Excel\File.xlsx

Access
Driver={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=Path\To\Access.accdb

SQLite
Driver={SQLite3 ODBC Driver};Database=Path\to\sqlite.db

SAP HANA
DRIVER={HDBODBC};UID=myUser;PWD=myPassword;SERVERNODE=myServer:30015

Checking for Drivers

Note that each connection string specifies a driver, and then additional information. The additional information depends on the driver you are using. To determine which drivers are on your system, open the ODBC Data Source Administrator window: hit the Windows key, type ODBC, and choose the option called ODBC Data Sources (64-bit). If you are running 32-bit FlexSim, open the 32-bit version. Then go to the Drivers tab.

On my system, the Drivers tab shows drivers for Access, Excel, SQL Server, and SQLite3. I don't have a driver for SAP HANA; if I did, you'd see a driver named HDBODBC in the list. To access that kind of database, I'd need to install that driver. Note also that the name of the driver used in the connection string must match exactly what is shown in this tab.

Other Info

You may see an exception appear when you test the connection to your database. If the view shows that the connection succeeded, then it has succeeded. The exception happens because FlexSim tries to get a list of tables from the database that it's querying, and FlexSim may not guess correctly for your particular data source. That exception can be safely ignored.

If you used the old db() commands in the past, consider upgrading to using the Database Connector. It will be orders of magnitude faster to read in an entire table.
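Once a Database Connector is configured, the query can also be run from FlexScript. The following is a hedged sketch using the Database.Connection API; the connector name and target table are placeholders, and you should verify the method names against your version's documentation:

    // Query an ODBC source through a Database Connector named "DBConnector1"
    Database.Connection con = Database.Connection("DBConnector1");
    con.connect();
    Database.ResultSet result = con.query("SELECT * FROM [Sheet1$]");
    result.cloneTo(Table("GlobalTable1")); // copy the result into a Global Table
    con.disconnect();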
In version 2018 and forward, you can make this chart using a Chart Template: simply drag and drop the chart from the dashboard library. This article may help you understand how the chart template works. You can use the Install button on the chart template to view the Process Flow, Statistics Collector, and Calculated Table that make the chart.

This article reviews how to use the Zone, along with the Statistics Collector, to create a bar chart of the current work-in-progress (WIP) by item type. The method scales to as many types as you need (this example uses 30 types), and can easily be adapted to text data, like SKUs. An example model (zonecontentdemo.fsm) demonstrates this method.

Creating the Process Flow

To create this chart, we first need to gather its data. In this case, it is easiest to build off the capabilities of the Zone. In particular, we will use Zone Partitions to categorize all of our objects. After we set up the Zone, we'll use a Statistics Collector to gather the data that we need.

Create a new General Process Flow. You can have as many General Process Flow objects as you want, so let's make one that just deals with gathering statistics. This way, gathering statistics will not interfere with the logic in our model.

Here's how it works. The Listen to Entry source is configured to listen to a group of objects. In this case, the group contains all the sources in the model, and it's listening to the OnExit of the sources. However, it could be the OnEntry or OnExit of any group of 3D objects. If you want to split the statistics by Type or SKU, then any flowitem that reaches the entry group should already have the appropriate labels. In this example model, when a flowitem leaves any source, a token gets created. The token makes a label called Item that stores a reference to the created item.

The next step is to link the flowitem with the token that represents it, which the Link Token to Item activity does. Now the item has a label that links back to the token. The token then enters a Zone that is partitioned by type. At last, the token comes to a Decide activity, configured not to release the token; the token will be released by the second part of the flow. Once the token is released, it exits the Zone and goes to a Sink.

The second part of the flow has an Event-Triggered Source that is configured to listen to all the sinks in the model. Again, the entry objects and exit objects are arbitrary; you can gather data for the entire model, or for just a small section of the model, using this method. The Event-Triggered Source also caches off the item in a label. At this point, we need to release the token that was created when the item entered the system. The token created on the exit side has a reference to the item, which has a reference back to the token created on the entry side. We release that token to 1, which means connector 1.

Note: We could have used a Wait for Event activity in the Zone, and then used the match label option to wait for the correct item to leave the system. However, this method is much, much faster, especially as the number of tokens grows.

Creating the Bar Chart Statistics Collector

The next step is to create a Statistics Collector that gathers data appropriate for a bar chart. This method grows the number of rows dynamically, so it won't matter how many types (or SKUs) your model has; you will still get one bar per type/SKU. In order to make the number of rows dynamic, we need to listen to the OnEntry and OnExit of the Zone activity.

Notice the shared label on this collector. Because this label is shared, both the OnEntry and OnExit events will create it on the data object. The value of this label is the item's type.

Next, we move to the Data Recording tab. Set the Row Mode to Unique Row Values, and set the row value to Partition. This means that whenever an event fires, the Statistics Collector will look at the Partition label on the data object. If the value is new to the collector, the collector will make a new row for it. If not, the collector will use the row that is already present.

Finally, we need to make our columns. We only need two: one for the Partition, and one for the Content of that partition. Both columns can use the Integer storage type and raw display format. However, if your partition value is text-based, like an SKU, you should use the String storage type. The value for the Partition column is just the data object's Partition label. Its Update option is set to When Row is Added; this way, the collector knows that this value will not change and that it's available at the time the row is created.

The other column is a little harder, because we need to use the getstat command. The getstat command arguments depend on the stat you are trying to get. In this case, we are asking the Zone (current, the event node) for the Partition Content statistic. We want the current value. Since this is a process flow activity, we pass in the instance as the next argument. Finally, we pass in which partition we want to get the data from: the row value. In this case, we could have identically passed in data.Partition. Also, notice that this column is updated by event dependency. To make sure this does what we want, we edit the event/column dependency table so that the Content column is updated when items enter and exit.

Now, open the table for the Statistics Collector. You should see two columns. When you run the model, rows will be added as items of different types are encountered. Early in a run, before all 30 types of item have been encountered, the table will have fewer than 30 rows.

Making the Bar Chart

This is the easiest part. Create a new dashboard, and add a new bar chart. Point the chart at the Statistics Collector. For the Bar Title option, choose the Partition column. Be sure to include the Content column. Also, make sure that the "Show Percentages" checkbox on the Settings tab is cleared. You can set the color on the Colors tab.

Ordering the Data

Because the rows of this table are created dynamically, the order of the rows will likely change run to run. To force an ordering, you can use a Calculated Table. Since the number of rows on this table doesn't grow indefinitely, and the number is relatively small, it's okay to set the Update Mode on the Calculated Table to Always. We simply select all columns from the target collector (CurrentContent, in this case) and order them by the Partition column. That yields an ordered bar chart.

Example and Additional Charts

The attached example model demonstrates this method, as well as how to create a WIP By Type vs Time chart. Happy data collecting!

zonecontentdemo.fsm
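Based on the argument order described above, the Content column's value might look like the following one-line sketch. This is illustrative only; check the getstat documentation and the STAT_CURRENT macro against your version before relying on it:

    // Current content of one partition of the Zone, evaluated in a collector column
    // current = the event node (the Zone activity); instance = the Process Flow instance
    getstat(current, "Partition Content", STAT_CURRENT, instance, data.Partition)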
In this quick video we demonstrate how to create custom animations for machines or equipment in FlexSim. We customize an animation on the Processor object, recreating the real movement of a stretch film wrapping machine. See the complete step-by-step tutorial for this task on the FlexSim Brasil YouTube channel. Also attached to this article is a folder called 'Wrapper' with the 3D files, so anyone interested in creating the customizations can access and use them. wrapper.rar animationcreator.fsm
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that serves FlexSim to a browser, you may want to proxy this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Nginx web server.

Install the FlexSim Web Server Program

1. Download and install the FlexSim Web Server from https://account.flexsim.com
2. Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
3. Change the port from 80 to 8080
4. Start the FlexSim Web Server by double clicking flexsimserver.bat
5. Test the server by going to http://127.0.0.1:8080

Install Nginx Reverse Proxy

1. From a browser, visit http://nginx.org/en/download.html
2. Download the latest stable release for Windows
3. Extract the downloaded nginx-<version>.zip
4. Rename the unzipped nginx-<version> folder to nginx
5. Copy the nginx folder to C:\
6. Double click the C:\nginx\nginx.exe file to launch Nginx
7. Test Nginx by going to http://127.0.0.1

Configure Nginx to Proxy to the FlexSim Web Server

1. Open C:\nginx\conf\nginx.conf in a text editor
2. Find the section that says:

    location / {
        root html;
        index index.html index.htm;
    }

3. Comment out the root and index directives and add a proxy_pass directive so it appears like this:

    location / {
        proxy_pass http://127.0.0.1:8080;
        #root html;
        #index index.html index.htm;
    }

4. Save the nginx.conf file

Reload Nginx to Apply the Changes

1. Open a command line window: press Windows+R to open the "Run" box, type "cmd", and click "OK"
2. From the command line window, change to the nginx directory: cd C:\nginx
3. Reload Nginx: nginx -s reload

Test the FlexSim Web Server Being Proxied by Nginx

From a browser window, again go to http://127.0.0.1. You should now see the FlexSim Web Server interface proxied through Nginx.

Now that you have the FlexSim Web Server proxied through Nginx, you may decide you want to configure Nginx to handle security, authentication, and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting up these customizations. A few resources you may consider: https://forum.nginx.org https://stackoverflow.com
In this article, you will learn how to export data from individual replications to an external database. This is an advanced tutorial: it assumes some exposure to databases and SQL, and that you are familiar with the Experimenter and/or Optimizer in FlexSim. This tutorial uses PostgreSQL, but should be compatible with the most popular SQL database engines.

Model Description

Let's assume you have a model with at least one Statistics Collector, as well as an experiment ready to go. The model used in this example is fairly simple. As the model runs, a Statistics Collector fills in a table, making a new row for every item that goes into the Sink and recording the time and the Type label of the item. For the experiment, we don't have any variables, but we do have one Performance Measure: the Input of the Sink. Of course, an actual model would have many Variables, Scenarios, and Performance Measures, but this model leaves out those details. For reasons which come up later, this model also has two global variables, called g_scenario and g_replication. We will use these during the experiment phase.

Connecting to the Database

To connect to a database, you can use the Database Connector tool. Each Database Connector handles the connection to a single database; if you need to connect to more than one database, you will need more than one Database Connector. For this model, I have added a Database Connector and configured the Connection tab to connect to a PostgreSQL database called "flexsim_test" running on my computer at port 9001. You can test the connection by clicking the "Test Connection" button. The test attempts to connect, as well as query the list of tables found in the database.

Setting Up the Export

The Export tab specifies that we are exporting data from StatisticsCollector1 to the table called experiment_results in the database (this table should exist before you set up the export). The Append to Table box means that when the export occurs, the data will be added to the table; otherwise, the existing data would be cleared from the target table. The other interesting thing here is that we are exporting more columns than the Statistics Collector has, namely the Scenario and Replication numbers. The expression in the From FlexSim Column column must be valid FlexQL (FlexSim's internal SQL language). Wrapping the global variable values in Math.floor() leaves them unchanged, and works well with FlexQL.

Setting up the Experimenter

Finally, let's take a look at just a little bit of code that makes the Experimenter dump data to a database. There is code in three triggers:

- In the Start of Experiment trigger, we need to clear the table. The code in this trigger connects to the database, runs the necessary query, and then closes the connection to the database.
- In the Start of Replication trigger, the code simply copies the replication and scenario values into the global variables we created for this purpose.
- In the End of Replication trigger, the code uses function_s to call "exportAll" on the Database Connector that we created. This uses the settings on the Export tab to dump the data from StatisticsCollector1 to the database table, including the replication and scenario columns.

These same triggers run during an optimization, so the same logic applies there.

Run the Experiment

Finally, you can run the experiment. When each replication ends, the Statistics Collector data will be exported to the database. Databases can handle many connections simultaneously. This is important, because the child processes that run replications will all open individual connections to the database at the end of each replication. This leads to many concurrent connections, which most databases are designed to handle.

But how can we tell that it worked? If you connect to the database with another tool, you can see the result table. Note that there are 1700 rows, and that data from Replication 5 is included. Additionally, you could use the Import tab to import all the data, or a summary of the data, into FlexSim.

Conclusion

Storing data in a database during an experiment is not difficult to do. There are many excellent tools for analyzing and visualizing database tables, which you could then use to further explore and understand your system.

postgresqlexperimentdemo.fsm
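The trigger code itself isn't shown in this summary. A hedged sketch of what those three triggers might contain follows; the function_s call to "exportAll" is described above, but the connector name, node path, and the assumption that the triggers expose scenario and replication values are all illustrative and should be checked against your model:

    // Start of Experiment: clear the results table
    Database.Connection con = Database.Connection("DBConnector1");
    con.connect();
    con.query("DELETE FROM experiment_results");
    con.disconnect();

    // Start of Replication: stash the numbers so the Export tab columns can read them
    g_scenario = scenario;        // assuming the trigger supplies these values
    g_replication = replication;

    // End of Replication: run the export defined on the connector's Export tab
    function_s(Model.find("Tools/DatabaseConnectors/DBConnector1"), "exportAll");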
In this video you will learn how to represent failures and/or breakdowns in a FlexSim simulation model using the MTBF/MTTR table. For more video tutorials, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
Sometimes data exists in Google Sheets that needs to be brought into FlexSim. There are multiple ways to do this, discussed in this article.

Copy and Paste

This is the easiest method to get data from Google Sheets into FlexSim. Here's how it works:

1. Open the desired sheet in your browser.
2. Click the top-left corner to select everything.
3. Copy the data (use ctrl-C).
4. Open FlexSim.
5. Create a Global Table if you haven't already.
6. Ensure the number of rows and columns in the Global Table is large enough to hold the pasted data.
7. Click on the column header for the first row in the Global Table.
8. Paste the data (use ctrl-V).

Pros: Quick, easy. Cons: Need to resize the Global Table correctly beforehand; repeat the entire process if the data changes.

Export/Import via CSV

This is also an easy method to get data. Here are the steps:

1. Download your sheet as a csv file.
2. In FlexSim, use the importtable() command to dump the csv into the global table. For example:

    importtable(Table("GlobalTable1"), "data.csv", 1)

You could add this code to your model's OnReset trigger if desired.

Pros: Quick; table sized to the csv data automatically. Cons: Repeat downloading the csv if the data changes.

Export/Import via XLSX

You can also download a Google spreadsheet as an Excel file. Then you can use the Excel importer as normal.

Pros: Quick; table sized to the data automatically; many options for configuring. Cons: Repeat downloading the xlsx file if the data changes.

Import via Python

This method is more advanced and requires some configuration for the model and your Google account. Once complete, however, changes can be pulled in automatically without any manual steps.

Follow the Sheets quickstart for python found here: https://developers.google.com/sheets/api/quickstart/python

This guide walks you through creating a Google Cloud Project and creating credentials for that project. In addition, consider using this modified python file instead. This file creates a get_values method that the model can call, and that method is also called from main(), so it's easy to test in a python debugger:

    import os.path

    from google.auth.transport.requests import Request
    from google.oauth2.credentials import Credentials
    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError

    # If modifying these scopes, delete the file token.json.
    SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]

    # The ID and range of a sample spreadsheet.
    SAMPLE_SPREADSHEET_ID = "----- add your sheet's ID here -------------"
    SAMPLE_RANGE_NAME = "A1:B"

    def get_values():
        """Shows basic usage of the Sheets API.
        Returns values from a sample spreadsheet.
        """
        creds = None
        # The file token.json stores the user's access and refresh tokens, and
        # is created automatically when the authorization flow completes for
        # the first time.
        if os.path.exists("token.json"):
            creds = Credentials.from_authorized_user_file("token.json", SCOPES)
        # If there are no (valid) credentials available, let the user log in.
        if not creds or not creds.valid:
            if creds and creds.expired and creds.refresh_token:
                creds.refresh(Request())
            else:
                flow = InstalledAppFlow.from_client_secrets_file(
                    "credentials.json", SCOPES
                )
                creds = flow.run_local_server(port=0)
            # Save the credentials for the next run
            with open("token.json", "w") as token:
                token.write(creds.to_json())

        try:
            service = build("sheets", "v4", credentials=creds)

            # Call the Sheets API
            sheet = service.spreadsheets()
            result = (
                sheet.values()
                .get(
                    spreadsheetId=SAMPLE_SPREADSHEET_ID,
                    range=SAMPLE_RANGE_NAME,
                    valueRenderOption="UNFORMATTED_VALUE",
                )
                .execute()
            )
            values = result.get("values", [])
            return values
        except HttpError:
            return []

    def main():
        values = get_values()
        if not values:
            print("No data found.")
            return
        for row in values:
            print(row)

    if __name__ == "__main__":
        main()

Save the above script next to your model. Create a user command in your model. Format the user command for python and enter the file name and method name. It might look something like this:

    /**external python: */
    /**/"sheets"/**/
    /** \nfunction name:*/
    /**/"get_values"/**/

The return type of the command should be var, which means any Variant type. Use code like the following to clone the data to a global table:

    Array values = getValues(); // call the user command
    Array colHeaders = values.shift(); // pull off the header row
    for (int i = 1; i <= values.length; i++) {
        Array row = values[i];
        row[0] = nullvar; // clear slot 0 (the row header slot) of each data row
    }
    Table(values).cloneTo(Table("GlobalTable1"));

Add the above code to a reset trigger.

Pros: automatic once complete; easy to keep the data up-to-date. Cons: requires complicated setup and some python coding. The script could be adjusted to download additional ranges and then return all data at once, but that requires some coding ability.

Import via HTTPS

Google recommends you use a client library to access its APIs. However, it is entirely possible to use HTTPS requests instead. This could all be done from FlexScript, with no additional installations required.

Pros: done all from FlexScript; no extra installs. Cons: very technical.

Conclusion

There are several ways to extract data from Google Sheets into FlexSim. Each has pros and cons. Choose the one that best fits your circumstances. Good luck!
One of the most powerful features of Process Flow is the ability to easily define a Task Sequence. However, many real-life situations require the coordination of multiple workers and machines to do a single task. This article demonstrates one approach you can use with Process Flow to coordinate multiple Task Executers, or in other words, to create a coordinated task sequence. This article talks about an example model (handoff.fsm). It might be easiest to open that model, watch it run, and read this article with the model open.

The Example Scenario

Items enter the system on the left. The yellow operator must carry each item to the queue in the middle, and then wait for the purple operator to arrive. Once the two operators are both at the middle queue, the yellow operator can unload the box, and the purple operator can take it. After this point, the yellow operator is free to load another item from the left queue. The purple operator takes the item, waits for a while, and then puts the item in the sink on the right.

The interesting part of this model is the hand-off. The yellow operator must wait for the purple operator, and vice versa. This is the synchronization point, and it requires coordination of both operators. The approach used in this model allows you to add more operators to the yellow side, and more to the purple side, while still maintaining that a yellow operator must wait for a purple operator before unloading the box.

The Example Model

In addition to the 3D layout described previously, there are 5 process flows in the example model. The first is a General flow, and defines the logic for each task. The second is a Task Executer flow, and defines the logic for the yellow operator. The third is also a Task Executer flow, and defines the logic for the purple operator. The remaining two flows are synchronization flows, for synchronizing between the other three flows.

Synchronizing on a Task

The basic approach in this model uses the Synchronize activity. This activity waits for one token from each incoming connector before it allows any of the tokens to move on. The Yellow Purple Sync flow is a global Sub Flow. The flows for the yellow and purple operators each use a Run Sub Flow activity to send a token to this flow, to their respective Start activities (you can use the sampler on the Run Sub Flow activity to sample a specific Start activity in a sub flow). This is what allows both the yellow and purple operators to wait for each other.

However, it is important that the yellow and purple operators are both doing the same task. In this model, there is a token that represents each item that needs to be moved. Both operators get a reference to this task token. The Synchronize activity is set up to partition by that Task token. That means that a yellow and a purple operator must both call this sub flow with the same task token, ensuring that each task has its own synchronization. In the example model, this kind of synchronization happens between operators, and between each task and an operator. Basically, the task must wait for the operator to finish that operator's part.

The Task Flow

A task token is created every time an item enters the first queue. The Tasks flow puts that task on both the Yellow and Purple lists. In both cases, the task token does not wait to be pulled, but keeps itself on the list. Then, the task token waits for a yellow operator to finish with it, and then for the purple operator to finish with it. There is a Zone in this flow, but its only purpose is to gather statistics on how long the whole task took.

The Yellow and Purple Flows

These flows are easiest to understand when viewed side by side. Recall that each task is put on both the Yellow and Purple lists at the exact same model time. The yellow operator waits to get a task (at the Get Task activity). Then the operator travels to the first queue, gets the item, and travels to the second queue. At this point, the yellow operator waits. At the same time, the purple operator is also waiting for the task. The purple operator just has to travel to the second queue before waiting for the yellow. Once the yellow operator arrives, the purple operator also has to wait for the yellow operator to unload the box. On the yellow side, once the purple operator arrives, the yellow operator unloads the box, and then synchronizes with the purple operator, allowing the purple operator to load the box.

Summary

The purpose of this article is to show one method for synchronizing tokens in separate flows. That method is as follows:

- Have a token for each task.
- As each task executer (or fixed resource) needs to synchronize, they each use a Run Sub Flow activity, putting the token in a specific Start activity.
- The Sub Flow (a global Sub Flow) has a Synchronize activity that requires a token from each participant for that task before releasing the tokens.

This is certainly not the only way to create this model. However, there are some advantages:

- By forcing the task to synchronize, you can gather stats on how long each phase of the task took, as well as how long the complete task took.
- You can add more yellow or purple operators by copy/paste; they simply follow their own logic.
- Each set of logic is separated; tasks, yellow operators, and purple operators each have their own flows, making each one much simpler.

The exact approach used in the example model will not work as-is for every model. However, you can apply the general principles and adapt them to your own situation.
In this example model you'll see two identical elevator setups. However, you will notice that ElevatorBank1 allows the patient to move to the next floor properly, whereas ElevatorBank1_2 floats the patient up the network node instead of using the elevator. There are a few steps you must follow to ensure you have a properly working elevator in your model:

1. Make sure everything is working the way you intended, without an elevator.
2. Add in the elevator, select it, and then check the 'Connect to Path' box.
3. Ensure that the elevator is connected to the nearest path node and any nodes above it.

One thing to note: after resetting, if you click on any of the path nodes that the elevator is connected to, you will see that the On Arrival trigger now says Send Message to Request Elevator. This is the code that actually calls the elevator when a patient arrives at the node. The elevator automatically adds this to the connected nodes when resetting, but this trigger option can be added to any node.

Another good practice, especially if patients walk by the elevator without always using it, is to make a separate node off on a spur. That way patients aren't triggering the elevator every time they walk by.
The attached model contains a BasicTE that mimics some operations of a tower crane. You should be able to use it like any other task executer. Labels on the crane allow the speeds and operating heights to be altered. To change the jib/beam length, use the label parameter and it will apply at reset. Similarly, to change the height, for now just change the tower height and press reset to have the rest attached at the correct height.

TowerCrane_basicTEexample.fsm

Update: Added a user library that will scale the crane based on the model units. Also changed some labels so that rotational speed is specified there, and the jib/beam now uses the object properties for max speed and acceleration.

TowerCrane.fsl
In this video you will learn how to set different arrival rates according to the time of day using the Source object. For more video tutorials, you can visit the FlexSim Andina YouTube channel and access our FlexTips playlist.