FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
In this video you will learn how to set up batch process times using the Processor object. For more tutorial videos, you can visit the FlexSim Andina YouTube channel and check out our FlexTips playlist.
View full article
In this video you will learn how to set different arrival rates depending on the time of day using the Source object. For more tutorial videos, you can visit the FlexSim Andina YouTube channel and check out our FlexTips playlist.
View full article
The attached model contains an object process flow for a BasicFR to sit between a regular conveyor and a Mass Flow Conveyor. It looks at the interval between the discrete items leaving the regular conveyor and creates a new generative rate when it detects a change, allowing you to convert a stream of hundreds of items into singular events for the MFC as needed and capitalize on the advantages of MFCs. It's a simple process flow: to use it, just connect a BasicFR between the two conveyors and add it to the object process flow as a member instance. This is a proof of concept and does not currently handle accumulation across the interface, or aggregation from multiple discrete conveyors. MFC_RateMaker.fsm
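As a rough sketch of the detection idea (this is conceptual, not the library's actual code; the labels LastExitTime, LastInterval and NewRate are illustrative and would live on the BasicFR):

// In a trigger or activity that fires when an item leaves the discrete conveyor.
double interval = Model.time - current.LastExitTime;
current.LastExitTime = Model.time;
if (interval > 0 && Math.fabs(interval - current.LastInterval) > 0.0001) {
    current.LastInterval = interval;
    current.NewRate = 1.0 / interval; // new generative rate in items per time unit
}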
View full article
If you've ever tried to nest groups of objects inside a hierarchy of planes, you may find the drawing of the planes suboptimal and lacking information. Using the container (modified plane) in the attached user library you can represent the container with just an outline. A settings dashboard is installed with the library along with some user commands and global variables. Corner prisms show the nesting layers under the prism. The option 'Use the container center' allows you to use it either as a plane as before or, when unselected, as a bordered frame where dropping an object or clicking and dragging within the borders will behave as though you are dropping onto or clicking/dragging the model floor. You can also choose to hide the containers entirely for the cleanest visuals. I hope this will encourage users to use containers more, since when coupled with Templates and Object Process Flows they can increase scalability and make your developed assets more manageable. (In those cases the container becomes the member instance of the process flow or template master, and references to its components are made through pointer labels on the container rather than names, which you may want to alter for reporting purposes. The pointer labels are updated automatically when creating a new instance of the container.) ContainerMarkers_v1.fsl If you want planes you already have in your model to adopt this style, just add this to their draw code: return containerdraw(view,current);
View full article
This article is an updated version of a previous article by @Paul Toone. Thanks to Paul for the original, which you can read here: https://answers.flexsim.com/articles/23118/flexsim-xml-18.html In FlexSim, you can save any tree (including models and libraries) in XML format. There are many advantages to using this capability in your model development, including: Since XML is an ASCII/text-based format, it increases the utility of using content management and versioning software such as Subversion, CVS, Microsoft Visual SourceSafe, Git, etc. to manage the development of a model. These systems automatically track a history of changes to a model or project, and for ASCII/text-based files, they allow you to see line-by-line change logs for the files, as well as revert line-item changes if needed. With the added benefit of versioning systems comes the ability to develop a model concurrently by different modellers using a much more streamlined method. No more saving off individual .t files from one modeller's changes to a model and loading them manually into the master model. When saving to XML format, if one modeller is working on one portion of the model, only that portion of the model file changes when saved, so when the modeller checks his changes into the versioning system's repository, another modeller can automatically merge those changes into his copy. If there are conflicts, where two modellers change the same part of the model, then the versioning system has tools that allow modellers to easily see in the XML file where the conflicts occur and quickly perform the manual conflict resolution. FlexSim also has the added ability to distribute a single model across multiple XML files. This increases your control over how to manage the model development in your versioning system, and makes conflict resolution even easier. The distributed save mechanism also opens the door for much more automated control of FlexSim. By saving small pieces of your model off into separate XML files, you can auto-regenerate one or more of those XML files outside of FlexSim, consequently changing the configuration of the model for the purpose of running different scenarios. Tutorial This tutorial will take you through saving a model as XML, and show how you can configure the distributed save. Build a Simple Model First build a simple example model containing a Source, Queue, Processor and Sink. Save in XML Format Save the model, and in the save dialog, choose "FlexSim XML" from the drop-down at the bottom. Save the model as "PostOfficeXML". Now you've saved the file in FlexSim's XML format. You can view the file in a regular text editor like Visual Studio Code, Notepad++, etc. It's not necessary to know all the details of the file format. In simple terms, the primary tag in FlexSim XML is the <node> tag, representing a node in FlexSim. The node's name is described in the <name> tag, and the node's data, if applicable, is described in the <data> tag. If you plan on doing automated changes and need a more detailed definition of the XML format, you can refer to FlexSim's XML schema (see help/MiscellaneousConcepts/FlexSimXML.xsd in the FlexSim install directory). Version Management with Git At FlexSim, the development team uses Git to manage collaboration between developers. Part of development includes updating the MAIN and VIEW trees. In a similar way, you can use Git to collaboratively update the MODEL tree. The standard way to use Git is as follows: Set up a remote repository for the project.
There are many online services for this, including GitHub, Bitbucket, and many others. Each one has different pricing and terms of use. It is also possible that your company already runs a Git hosting service where you could create your remote repository on a company server. Once you have a remote repository, each user "clones" that repository to their local computer. Everyone "pulls" the latest changes from that repository and "pushes" their changes to that repository. To manage pushing and pulling, you can use Git on the command line. However, most users interact with Git through a client. One free option is Git Extensions: http://gitextensions.github.io/ By itself, version management is a deep topic. You might consider reading some tutorials to become more familiar with Git concepts: https://git-scm.com/docs/gittutorial https://github.com/git-guides https://github.com/skills/introduction-to-github https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud Managing Changes with Git Let's assume you have a repository with the model file already in it. Let's make a change by moving the queue and saving the model. Once you save the model, Git can show you the changes you've made. In this case, you can see the changed spatialx and spatialy values in your Git client. At this point, you could commit and push your changes. Others could then pull the changes into the model. Notice that Git looks at changes in terms of lines being added and removed. In this case, we removed two old lines and added two new lines. Distributed Save As models get bigger and more complex, it can be helpful to split a single model file into multiple XML files. To do this you create a "FlexSim File Map" file. This is an XML file with the .ffm extension. It must be placed in the same directory as the model's .fsx file, and, except for the extension, must be given the same name as the .fsx file. So, let's create that file. Let's also create a subdirectory to put the distributed files into. Now edit PostOfficeXML.ffm in a text editor. The first thing we'll do is put the node MODEL:/Tools/Workspace into a different file. This is something you'll almost always want to do if you're using version management. The node MODEL:/Tools/Workspace stores all of the windows open in the model, so if you're editing a model and change or reposition the windows that are open, that will change the model file. For version management this is often just noise that doesn't need to be tracked, so we'll have the node saved off to a different file, and then we can just never (or at least rarely) check that file into the version management system. Specify the following code: <?xml version="1.0" encoding="UTF-8"?> <flexsim-file-map version="1"> <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/> </flexsim-file-map> Save the file map, then go back into FlexSim. Save the post office model again under the same name "PostOfficeXML.fsx". Now look in the distributedfiles directory. You'll see that it contains a new XML file named windows.fsx. This file holds the definition of the node MODEL:/Tools/Workspace. All interaction with the model remains the same, i.e. from FlexSim you just load and save the main XML model file, but now the model is distributed across multiple files.
If you look at the change to PostOfficeXML.fsx in your git client, you'll see that many lines have been replaced with a single line specifying the other file. This shows how much of the content of PostOfficeXML.fsx has been moved to distributedfiles/windows.fsx. In the file map, the main document element is the <flexsim-file-map> tag. Inside the main document element should be any number of <map-node> elements. Each <map-node> element should have a path attribute, a file attribute, and a file-map-method attribute. The path attribute should specify the path, from the main saved node, to the node that is going to be saved into the distributed file. The file attribute specifies the path, relative to the saved model file, to the distributed file that you want to save. The file-map-method should have a value of either "single-node" or "split-points". In this case "single-node" means you want to save the entire Workspace node into windows.fsx. Now let's do an example of the "split-points" method. Let's say we want to save the Tools folder in one file, the Source and Queue into another file, and the Processor and Sink in a third file. To do this, add another <map-node> element to the file map: <?xml version="1.0" encoding="UTF-8"?> <flexsim-file-map version="1"> <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/> <map-node path="" file="distributedfiles/Tools.fsx" file-map-method="split-points"> <split-point name="Source1" file="distributedfiles/Source_Queue.fsx"/> <split-point name="Processor3" file="distributedfiles/Processor_Sink.fsx"/> </map-node> </flexsim-file-map> Now save the file, save your FlexSim model, and again you'll see new files created in the distributedfiles directory. If an element uses the "split-points" method, then it can have <split-point> sub-elements. Each split-point should have a name attribute, defining the name of a node inside the map-node's defined node, and a file attribute defining the file to save to. The map-node has a path="" attribute. This means that it applies to the root node, or the model itself. The map-node's file attribute defines the first file to use. This configuration tells FlexSim to save all sub-nodes in the model up to but not including the node named "Source1" to the file "distributedfiles/Tools.fsx", save everything from Source1 up to but not including the node named "Processor3" in "distributedfiles/Source_Queue.fsx", and save everything from Processor3 to the end in "distributedfiles/Processor_Sink.fsx". Why Distribute? The main reason you would want to distribute your model across multiple files is for version management. It allows you to easily see what you've changed in your model by seeing which files, and consequently which parts of the tree, have changed. If you inadvertently changed one piece of the model, you can easily see that and then revert that change if needed. When multiple modellers are developing the model, one modeller can essentially confine himself to one part of the model tree, and by associating that part of the tree with a specific file, it makes it much easier to merge changes from different developers, reduces the chance of conflicts in the merge, and makes it easier to do a manual merge if needed. Note on connections: FlexSim's XML save mechanism does have one catch regarding input/output/center port connections, as well as any other mechanism that uses FlexSim's coupling data.
If you change connections in one portion of your model, it will actually change the serialized values of all connections/coupling data that occur after those connections in the tree, even if those connections were not changed. This can very easily cause merge issues if multiple modellers are changing connection data. However, if you distribute the model across multiple files, connection changes where both connection end-points are within the same file will only affect that file, and will not change other files. So if you can localize large "connection sets" into individual files, you can minimize the effect of changes and subsequently minimize merge conflicts.
View full article
Engage with the FlexSim community here on the FlexSim forum boards. Check out our learning resources. Customers with current licensing can request direct technical support from their FlexSim representative, via phone, email, web meeting, or support ticket.  
View full article
This article describes an example of Reinforcement Learning being used to solve scheduling problems. See the model and Python files in the attached zip file. SchedulingRL.zip Problem Description This model represents a generic sheet metal processing plant. There are four machines in series. Each job requires time on all four machines. Jobs come in batches of 10. A poor sequence of jobs will cause blocking between items, lowering throughput. If the time between batches is long, such as a shift or a day, you could use the optimizer to determine the best sequence. If the time between batches is short, however, using the optimizer may not be feasible. For real sequencing problems, the time to find a good sequence can be anywhere from 5 minutes to an hour, or even longer. This makes it impractical for high-velocity situations. The attached model requests a decision every time the first machine in the series is available. The only action is an index for the Nth available job. So the decision can be interpreted as "which job should I do next?" Solution The general solution is to use reinforcement learning. However, this problem required customized Python scripts: The model uses custom parameters for observations. This allows arbitrary values for observations. The model uses a custom observation space. The observations include a table of the required times at each station for the remaining jobs. They also include an array of the in-progress jobs and their predicted remaining times. By using a Dict space, the Python scripts can combine all the observations into a single space. The model uses an Action Mask. An Action Mask is a binary array with one value per value of the action. This tells the RL algorithm about invalid options. The Python scripts require the sb3-contrib package. Use pip install sb3-contrib to install it. Results After training for 500k time-steps, the agent learns to choose jobs moderately well. If you run the inference script, you can use the experimenter to compare a random policy to a trained agent:
View full article
As of the current release (2024.2.2), FlexSim does not provide a tool to export an animated USD stage of a model. There is a workaround, though: record the FlexSim model connected to an NVIDIA Omniverse live session using one of the tools in NVIDIA's USD Composer, the Animation Stage Recorder. This is a step-by-step tutorial to guide you through the process. It is assumed that, if you are interested in this, you already know how to connect FlexSim to a live session in NVIDIA Omniverse. This may not be the perfect way to go, but it worked for me. Record_Animated_Stage_from_FlexSim.pdf
View full article
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that serves FlexSim to a browser, you may want a way to proxy this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Apache web server. Install the FlexSim Web Server Program Download and install the FlexSim Web Server from https://account.flexsim.com Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt Change the port from 80 to 8080 Start the FlexSim Web Server by double clicking flexsimserver.bat Test the server by going to http://127.0.0.1:8080 It should look like this: Install Apache Web Server for Windows Download Apache x64 from https://www.apachehaus.com/cgi-bin/download.plx Extract the httpd-<version>.zip file you have downloaded Go into the httpd-<version> folder you have extracted and copy the Apache24 (or Apache25) folder to C:\ Install Apache Dependencies Make sure you have the Microsoft Visual C++ 2008 SP1 package installed. You can get it here: https://www.microsoft.com/en-US/download/details.aspx?id=26368 Download the vcredist_x64.exe package and run the installation Configure Apache Open C:\Apache24\conf\httpd.conf in a text editor Look for the following lines in this configuration file and remove the # character #LoadModule proxy_module modules/mod_proxy.so #LoadModule proxy_http_module modules/mod_proxy_http.so #LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so Those modules should now look like this: LoadModule proxy_module modules/mod_proxy.so LoadModule proxy_http_module modules/mod_proxy_http.so LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so At the bottom of the httpd.conf file, add these 3 lines and save the file: ProxyPass / http://127.0.0.1:8080/ ProxyPassReverse / http://127.0.0.1:8080/ AllowEncodedSlashes On Run Apache Web Server In File Explorer or a command prompt, browse to the C:\Apache24\bin folder and run httpd.exe Now that you have the FlexSim Web Server proxied through Apache, you may want to configure Apache to handle security, authentication and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting these customizations up. A few resources you may consider: https://community.apachefriends.org/f/ https://stackoverflow.com
View full article
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that serves FlexSim to a browser, you may want a way to proxy this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Nginx web server. Install the FlexSim Web Server Program Download and install the FlexSim Web Server from https://account.flexsim.com Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt Change the port from 80 to 8080 Start the FlexSim Web Server by double clicking flexsimserver.bat Test the server by going to http://127.0.0.1:8080 It should look like this: Install Nginx Reverse Proxy From a browser, visit http://nginx.org/en/download.html Download the latest stable release for Windows Extract the downloaded nginx-<version>.zip Rename the unzipped nginx-<version> folder to nginx Copy the nginx folder to C:\ Double click the C:\nginx\nginx.exe file to launch Nginx Test Nginx by going to http://127.0.0.1 It should look like this: Configure Nginx to proxy to the FlexSim Web Server Open C:\nginx\conf\nginx.conf in a text editor Find the section that says: location / {    root html;    index index.html index.htm; } Comment out the root and index directives and add a proxy_pass directive so it appears like this: location / {       proxy_pass http://127.0.0.1:8080;    #root html;    #index index.html index.htm; } Save the nginx.conf file Reload Nginx to Apply the Changes Open a command line window: press Windows+R to open the "Run" box, type "cmd", and click "OK" From the command line window, change to the nginx directory: cd C:\nginx Then reload Nginx: nginx -s reload Test the FlexSim Web Server Being Proxied by Nginx From a browser window, again go to http://127.0.0.1 You should now see the FlexSim Web Server interface proxied through Nginx Now that you have the FlexSim Web Server proxied through Nginx, you may want to configure Nginx to handle security, authentication and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting these customizations up. A few resources you may consider: https://forum.nginx.org https://stackoverflow.com
View full article
In the attached model we use a Timetable and two MTBF/MTTR objects to define Schedule Loss, Availability Loss (breakdowns) and an element of Performance Loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the send-to percentage, which accounts for Quality Loss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with the minimum at 5 seconds - so it also contributes to Performance Loss. When the Type of the item changes, a setup time occurs, which is the final contributor to Performance Loss. Two state profiles were added to the processor - one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) and regular FlexSim state changes, and determines the availability state that should prevail. A user command getOEEstat is used to access the values, which it calculates on demand and stores in a label on the processor called statsMap. The syntax for this command is: getOEEstat(myMachine,"OEE") The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE". A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats of Processor 1. Processor_OEE_2.fsm 2023-08-22 Update: Added 'TEE' stat.
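For example, reading a few of these stats in a dashboard or Performance Measure might look like this (the object name "Processor1" is illustrative):

// Query the on-demand OEE stats via the installed user command.
Object machine = Model.find("Processor1");
double oee = getOEEstat(machine, "OEE");
double availability = getOEEstat(machine, "AvailabilityRatio");
return oee;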
View full article
ContinuousUtilization_4.fsm In the attached model, you'll find a Utilization vs Time chart driven by a Statistics Collector and a Process Flow. This article is an alternative approach to the chart shown here: https://answers.flexsim.com/content/kbentry/158947/utilization-vs-time-statistics-collector.html Thanks @Felix Möhlmann for putting that article together. The concept in this model is very similar, but this is a different approach. There are pros and cons to each method. This method offers some performance improvements at the cost of writing more FlexScript code. This method also doesn't handle warmup time, where the other method does. The general idea in this model is to keep a history of state changes for the previous hour (or other time interval). The history adds new info as states continue to change and drops old info as it "expires" by being older than the time interval. I chose to use a Bundle (stored on a token label) for the history. Bundles are optimized to add data to the end. If a bundle is paged (they are by default), then they are also optimized to remove data from the beginning. [Begin Technical Discussion - TLDR; Bundles are fast at removing the first row and adding new rows] A paged bundle keeps blocks of memory (pages) for a certain number of rows. If a page fills up, it allocates another page. It keeps track of the location in the page for the next entry. Similarly, it keeps track of the location of the first entry on the start page. If you remove the first row, that start location is moved forward on the page. If the start location gets to the end of a page, that page is dropped and the start location moves to the next page. [End Technical Discussion] The Process Flow maintains the history table. Whenever the object's state changes, the Process Flow adds a new entry to the history table. This part is straightforward. The tricky part is properly "expiring" the data. To do this, a second set of tokens waits for the last of the oldest data to expire. When the oldest set of data should expire, the tokens remove any rows that are too old and then wait for the next oldest row. If there are no expired rows, the tokens wait for one "time interval" and then check again. This is because if there is no expired data, data can't expire for at least one time interval. In this way, the history table is always kept up to date. The Statistics Collector is configured to post-process the history table. It calculates the total utilization, including the state that the object is currently in, as well as the state being "phased out" from the history table. The Statistics Collector can do this calculation at any point in the model run. So the sampling interval (called Resolution in the model) is independent of the time window.
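A minimal sketch of the expiration step described above, assuming the bundle is wrapped in a Table, its first column holds the state-change time, and the token labels History and TimeInterval exist (names are illustrative):

// Drop history rows older than the time window; removing the first
// row is cheap on a paged bundle, as discussed above.
Table history = Table(token.History);
double cutoff = Model.time - token.TimeInterval;
while (history.numRows > 0 && history[1][1] < cutoff) {
    history.deleteRow(1);
}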
View full article
Good practice to reduce variance when experimenting is to separate the streams of things that might vary in the model so that the random sampling is independent. An example might be that you have a number of processors which are members of the same breakdown profile (MTBF/MTTR object) where the individual breakdowns are dependent on the state of the processor. If during one scenario a processor is used more than before, then it may sample the duration and next breakdown earlier, and therefore change the sequence of breakdown samples drawn by the other machines, increasing variance. This is because the default setting for the MTBF time fields is 'getstream(current)' - which means a single stream for the MTBF object, shared across all members. You could try to change this in the MTBF by using 'getstream(involved)', where 'involved' refers to the breakdown member machine. This causes other problems, since if you're sampling processing times using the machine's stream too, then the number of items processed will again change the breakdown time samples. You may judge this to be acceptable, but in an ideal world you'd still want separate streams, and may want multiple streams for setup, processing, breakdowns, or subsystem failures. One way to accomplish this is by changing the way getstream() works such that it can generate a stream for any value you pass to it. That might be an object, as the current getstream() accepts, or it could be the string name of the object or its path. It could also be an array, which then opens a number of possibilities: In a breakdown you could replace getstream(current) with getStream([current,involved])   //generates a unique stream number for the MTBF/machine pair* In an Object Process Flow you could replace getstream(activity) with: getStream([current,activity]) // generates a unique stream for the instance and activity pair and works for the general process flow too. For a processing time on a processor, instead of getstream(current) you could use getstream([current,"Processing"]) and getstream([current,"Setup"]) to generate two separate sampling streams. The attached library contains an auto-installing user command that overrides getstream() to provide this functionality. The stream values save with the model. getStream-byvariant3.fsl * This implementation does have some limitations, since during an experiment it does not communicate back to the master model when trying to create new streams. For this reason you'll want to try to have all possible streams set up before running an experiment, or avoid the type of actions that dynamically create the requirement for new streams - so that might be keeping all possible fixed resources and task executers, and hiding/removing them from groups rather than destroying them, as the OnSet options of the parameters table do currently. Alternatively, if you consistently name the dynamically created instances, then the MTBF stream expression could be: getStream([current, involved.name]) Update: I've edited this post and library to use getStream (capital 'S'), since the override parameter (var thing) doesn't stay in place and eventually causes FlexScript build errors. So with the updated library you'll need to find/replace 'getstream' with 'getStream' from the model tree.
View full article
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy-to-set-up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm All necessary setup is done through labels of the collector. The first three are actually identical to labels found on the default Statistics Collector behind a state bar chart. - Objects should point at a group that contains all objects the collector should track the utilization for. - StateTable is a reference to the state table that will be used to determine which state counts as 'utilized'. - StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile). - MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization. - NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600s at 12 points within that interval, i.e. every 300s. Each measurement still denotes the utilization over the complete "MeasureInterval". The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the previous hour from that point in time. - StoredTimeMap is used to allow the collector to correctly function past a warmup time, by storing the total utilized value of each object up to that point. This should not be changed manually. Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply". The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The "NumSubIntervals" label essentially just controls how many entries are kept in that array. To copy the collector into another model you can create a fresh collector in the target model. Then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector. I hope this can help to speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it). Best regards Felix Edit: Added a user library with the collector as a draggable icon to the attached files. Edit2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this. Edit3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
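Conceptually, each measurement does something like the following sketch (plain variables stand in for the row labels; the numbers are made up):

// Rotate the per-object array of 'total utilized time' samples and
// compute the average utilization over the measurement interval.
Array samples = [0, 120, 300, 540];   // oldest ... newest running totals
double currentUtilized = 800;         // total utilized time right now
double measureInterval = 3600;        // averaging window in model units
samples.push(currentUtilized);        // append the newest sample
double oldest = samples.shift();      // discard the expired oldest sample
double utilization = (currentUtilized - oldest) / measureInterval * 100; // ~22.2%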
View full article
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2, and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories, namely initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based order generation as well as history-based order generation. Initial Inventory The model includes a Global Table of initial inventory. The process flow's initial inventory section reads this table, and then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot(). Inbound I use the process flow to assign a slot to each incoming item. I use an Assign Labels activity called Find Slot to do this. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity properly finds a slot to store the item, then I go ahead and assign the item to that slot, and have an operator place it in the rack. Outbound I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method. In the random method, orders are generated randomly every ~30 seconds. Each order includes a number of SKU line items (again, random) and each line item includes a quantity of that SKU (again, random). Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, it reserves the item as "outbound" by assigning the Storage.Item.assignedSlot property to null (Set to Outbound activity). This ensures that no other process will find that same item for picking. The history-based order generation process uses much of the same functionality as the random-based one, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format for what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order, then spawning that order. Custom Rack Visualization I have also customized the visualization of the racks. I have added text to the front of each rack slot (and on the floor at its base) showing the address of that slot. Further, I've given the text a background that is color-coded to the SKU that the slot is designated to store. This was all done through the Storage System's Visualizations tab. I customized the Rack visualization.
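As a rough illustration of the two query-based calls described above, something like the following could be used (a hedged sketch: the query text, the Type label, and the variable names are assumptions from this demo, and the exact findSlot()/findItem() signatures should be checked against the Storage System documentation for your version):

// Inbound: find a random slot whose Type matches the item's Type.
Storage.Slot slot = Storage.system.findSlot("WHERE Type == $1 ORDER BY RAND()", 0, item.Type);
if (slot) {
    Storage.item(item).assignedSlot = slot; // reserve the slot for put-away
}

// Outbound: find a stored item of the target SKU and mark it outbound.
Storage.Item pick = Storage.system.findItem("WHERE Type == $1", 0, sku);
if (pick) {
    pick.assignedSlot = nullvar; // i.e. null, as described above
}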
View full article
FlexSim 2018 includes functionality for creating a custom table view GUI using a custom data source from within FlexSim. This article will go through the specifics of how to set this up. The Callbacks Custom Table Data Source defines callbacks for how many rows and columns a table should display, what text to display in each of the cells, whether they're read-only, etc. If you require further control of your table you can use a DLL or module and subclass the table view data source in C++. An example of how this data source is used can be seen in the Date Time Source activity properties. In the above table, the data being used to display both of these tables is exactly the same. The raw treenode table data can be seen on the right. The table on the left is displaying the hours and minutes for each start and end time rather than the start time and duration in seconds. In FlexSim 2018, there are a number of tables that currently utilize this new data source. They can be found at: VIEW:/modules/ProcessFlow/windows/DateTimeArrivals/Table/Table VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StatePieOptions/SplitterPane/States/Table VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/CompositeStatePieOptions/SplitterPane/States/Table VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StateBarOptions/SplitterPane/States/Table Copying one of these tables can be a good starting point for defining your own table. The first thing you have to do in order for FlexSim to recognize that your table is using a custom data source is to add a subnode to the style attribute of your table GUI. The node's name must be FS_CUSTOM_TABLE_VIEW_DATA_SOURCE with a string value of Callbacks. Set your viewfocus attribute to be the path to your data. This may just be a variable within your table. It's up to you to define what data will be displayed. If you want to have the default functionality of the Global Table View, you can use the guifocusclass attribute to reference the TableView class. This gives you features like right-click menus, support for cells with pointer data (displays a sampler), tracked variables (displays an edit button) and FlexScript nodes (displays a code edit button). If you're going to use this guifocusclass, be sure to reference the How To node (located directly above the TableView class in the tree) for which eventfunctions and variables you can or should define. At this point you're ready to add event functions to your table. There are only two required event functions. The others are optional and allow you to override the default functionality of the table view. If you choose not to implement the optional callback functions, the table view will perform its default behavior (whether that's displaying the cell's text value, setting a cell's value, etc). These event function nodes must be toggled as FlexScript. The following event functions are required: ---getNumRows--- This function must return the number of rows to display in the table. 0 is a valid return value. param(1) - View focus node ---getNumCols--- This function must return the number of columns to display in the table. 0 is a valid return value. param(1) - View focus node The following event functions are optional: ---shouldDrawRowHeaders--- If this returns 1, then the row headers will be drawn. The header row uses column 0 in the callback functions. param(1) - View focus node ---shouldDrawColHeaders--- If this returns 1, then the column headers will be drawn.
The header column uses row 0 in the callback functions. param(1) - View focus node ---shouldGetCellNode--- Allows you to decide whether you want to override the table view's default functionality of getting the cell node. param(1) - The row number of the displayed table param(2) - The column number of the displayed table If shouldGetCellNode returns 1 then the following function will be called: ---getCellNode--- Return the node associated with the row and column of the table. param(1) - The row number of the displayed table param(2) - The column number of the displayed table ---shouldSetCellValue--- Allows you to decide whether you want to override the table view's default functionality of setting a cell's value. param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table If shouldSetCellValue returns 1 then the following function will be called: ---setCellValue--- Here you can set your data based upon the value entered by the user. param(1) - The row number of the displayed table param(2) - The column number of the displayed table param(3) - The value to set the cell ---isCustomFormat--- Allows you to decide whether you should define a custom text format for the cell. param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table If isCustomFormat returns 1 then the following functions will be called: ---getTextColor--- Return an array of RGB components that will define the color of the text. Each component should be a number between 0 and 255. [R, G, B] for example red is [255, 0, 0]. param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table param(4) - The text to be displayed in the cell param(5) - The desired number precision ---getTextFormat--- Return 0 for left align, 1 for center align and 2 for right align. param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table param(4) - The text to be displayed in the cell param(5) - The desired number precision ---shouldGetCellColor--- Allows you to decide whether you should define a color for the cell's background. param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table If shouldGetCellColor returns 1 then the following function will be called: ---getCellColor--- Return an array of RGB components that will define the cell's background color. Each component should be a number between 0 and 255. [R, G, B] for example red is [255, 0, 0]. param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table ---shouldGetCellText--- Allows you to decide whether you want to override the table view's default functionality of getting a cell's text. param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table If shouldGetCellText returns 1 then the following function will be called: ---getCellText--- Return a string that is the text to display in the cell. 
param(1) - The cell node as defined by getCellNode param(2) - The row number of the displayed table param(3) - The column number of the displayed table param(4) - The desired number precision param(5) - 1 if the cell is being edited, 0 otherwise ---shouldGetTooltip--- Allows you to decide whether a tooltip should be displayed when the user selects a cell in the table. param(1) - The cell node as defined by getCellNode param(2) - The row number of the selected cell param(3) - The column number of the selected cell If shouldGetTooltip returns 1 then the following function will be called: ---getTooltip--- Return a string that is the text to display in the tooltip. param(1) - The cell node as defined by getCellNode param(2) - The row number of the selected cell param(3) - The column number of the selected cell ---isReadOnly--- Return a 1 to make the cell read only. Return a 0 to allow the user to edit the cell. param(1) - The row number of the displayed table param(2) - The column number of the displayed table
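For example, minimal FlexScript bodies for the two required event functions might look like this, assuming the view focus node holds table data (a sketch, not code taken from the view tree):

---getNumRows---
treenode focus = param(1);
return Table(focus).numRows;

---getNumCols---
treenode focus = param(1);
return Table(focus).numCols;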
View full article
The attached model provides an example of how to record and display overtime hours worked by staff in a small clinic. The following screen capture of the model after a 30-day run shows the various dashboards displaying information about the working hours of the staff in the model. The code snippet shown for the Activity Finished Trigger of the final exit activity in the patient track is used to record information on a couple of global variables and then trigger a recording event on the OT_DataCollector. The code snippet is documented in detail, so hopefully you will understand what is being done. As you can see from the Data Collector properties window, nothing special is going on there except recording the two global variables named GV_Overtime_Hours and GV_TotWork_Hours. The third and final piece of the puzzle is to create some User Defined dashboard widgets to display the information captured by the data collector in a few different ways. clinic-overtime-example-with-custom-data-collector.fsm
View full article
Attached is an example DLL Maker project that calls a Java function using the Java Native Interface (JNI) from C++. The steps for how this works are outlined below: 1. Install the Java Development Kit (JDK). In creating this example, I used OpenJDK installed via Microsoft's installer of Visual Studio Code for Java developers. 2. Write a Java program. In my example, I created a simple Hello class with an intMethod() as described in IBM's JNI example tutorial. 3. Compile the .java file into a .class file. I did this using the Command Prompt and executing the following command: javac Hello.java 4. Configure the DLL Maker project to include the JNI library: In VC++ Directories > Include Directories, add the following two directories: C:\Program Files (x86)\Java\jdk1.8.0_131\include C:\Program Files (x86)\Java\jdk1.8.0_131\include\win32 In Linker > Input > Additional Dependencies, add jvm.lib In Linker > Input > Delay Loaded Dlls, add jvm.dll 5. Include jni.h in your code. (Line 51 of mydll.cpp) 6. Create a JVM in your code. (Lines 61-102 of mydll.cpp) 7. Get a handle to the method you want to call and then call it. (Lines 118-126 of mydll.cpp) 8. Connect a User Command in FlexSim to fire that C++ code that executes Java code. See the dll_maker_test_model.fsm included in the attached zipped directory. * Note: this example was built with 32-bit FlexSim because the JDK I installed was 32-bit. Using a 64-bit JVM is beyond the scope of this simple example and is left as an exercise for the user.
View full article
In this phase you will be introduced to shared assets, specifically Zones. Post Office: Phase 3 Purpose Learn about the Zone shared asset. Description Restrict access to the waiting line queue so that only 5 Customers can be present at a time. Follow Up What happens to the 'Customers' that show up when the Zone is at capacity? How can we add Customer behavior so that if the waiting line is already full, they leave the post office? Video of Post Office: Phase 3 build: If you'd like to download the completed Post Office: Phase 3 model, you will find it here.
View full article
In this phase you will be introduced to Tasks, Tasksequences and Task Executers. In order to build Phase 2, you will need to start by having Post Office: Phase 1 pulled up in FlexSim. Tasks and Tasksequences Task A single instruction or action to be performed by a TaskExecuter object (e.g. LOAD flowitem) Tasksequence A series of tasks to be performed in sequence (an example tasksequence with tasks is shown below) Post Office: Phase 2 Purpose Modify our model to increase visual appeal. Description Create visuals so that the "Customer" flowitem looks and behaves more like a person standing in line, i.e. is able to walk to each location we send them to. Follow Up How can the use of labels and resources help us easily add additional service desks? Video of Post Office: Phase 2 build: If you'd like to download the completed Post Office: Phase 2 model, you will find it here.
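In FlexScript, building and dispatching such a tasksequence looks like this (the object names Operator1, Queue1, and Processor1 are illustrative):

// Create a tasksequence for Operator1 (priority 0, non-preempting):
// load an item from Queue1, then unload it at Processor1.
Object operator = Model.find("Operator1");
Object queue = Model.find("Queue1");
treenode item = queue.first; // first flowitem in the queue
TaskSequence ts = TaskSequence.create(operator, 0, 0);
ts.addTask(TASKTYPE_LOAD, item, queue);
ts.addTask(TASKTYPE_UNLOAD, item, Model.find("Processor1"));
ts.dispatch();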
View full article