FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser such as Google Chrome, Firefox, or Internet Explorer. Because the FlexSim Web Server is a basic service that serves FlexSim to a browser, you may want to proxy to it through a full-featured web server where you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Nginx web server.
Install the FlexSim Web Server Program
1. Download and install the FlexSim Web Server from https://account.flexsim.com
2. Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
3. Change the port from 80 to 8080
4. Start the FlexSim Web Server by double-clicking flexsimserver.bat
5. Test the server by going to http://127.0.0.1:8080 It should look like this:
Install Nginx Reverse Proxy
1. From a browser, visit http://nginx.org/en/download.html
2. Download the latest stable release for Windows
3. Extract the downloaded nginx-<version>.zip
4. Rename the unzipped nginx-<version> folder to nginx
5. Copy the nginx folder to C:\
6. Double-click the C:\nginx\nginx.exe file to launch Nginx
7. Test Nginx by going to http://127.0.0.1 It should look like this:
Configure Nginx to proxy to the FlexSim Web Server
1. Open C:\nginx\conf\nginx.conf in a text editor
2. Find the section that says:
location / {
    root html;
    index index.html index.htm;
}
3. Comment out the root and index directives and add a proxy_pass directive so it appears like this:
location / {
    proxy_pass http://127.0.0.1:8080;
    #root html;
    #index index.html index.htm;
}
4. Save the nginx.conf file
Reload Nginx to Apply the Changes
1. Open a command line window by pressing Windows+R to open the "Run" box. Type "cmd" and then click "OK"
2. From the command line window, type the following to change to the nginx directory and press Enter: cd C:\nginx
3. Now, type the following to reload Nginx: nginx -s reload
Test the FlexSim Web Server Being Proxied by Nginx
From a browser window, again go to http://127.0.0.1 You should now see the FlexSim Web Server interface proxied through Nginx. Now that you have the FlexSim Web Server proxied through Nginx, you may decide you want to configure Nginx to handle security, authentication, and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting up these customizations. A few resources you may consider: https://forum.nginx.org https://stackoverflow.com
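For example, if you later want Nginx to require a login in front of the FlexSim Web Server, a minimal sketch using HTTP basic authentication might look like the snippet below. The .htpasswd path is a hypothetical example; you would need to create that user file yourself, and a production setup should also consider HTTPS:
location / {
    auth_basic "FlexSim Web Server";
    auth_basic_user_file "C:/nginx/conf/.htpasswd";
    proxy_pass http://127.0.0.1:8080;
}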
View full article
This demo model shows the type of material handling logic that would be found in a Bombay sorter system. In this tiered conveyor system, products line up in rows and then drop onto the next conveyor below while staying in a row. More a proof of concept than a fully-featured sample model, it can serve as a springboard for more complex horizontal loop conveyor systems. A Bombay sorter (also known as a flat sorter) is a horizontal loop-style sorter. It's used for high-speed automated sortation of small, lightweight items, such as pharmaceuticals, books, and other small parcels. The chutes or cartons are located below the sorter, and when the product is in position, the doors swing open like a trap door to divert the product to the correct location. Bombay-sorter-demo.fsm
View full article
This article explores creating Fluid Tanks from a Basic FR and Process Flow. This turned out to be a lot more complicated than I thought. If you need fluid objects in FlexSim, use the standard fluid objects in the library or buy FloWorks. The approach in this article requires a lot of up-front work just to get started, and you probably want to spend that time configuring well-tested objects instead of cutting your own path. This article is directed at two audiences:
Those who don't want to use the fluid library or FloWorks (they have elected the way of pain)
Those who like learning from example models
If you are still interested, read on! TankDemo_10.fsm
Kinetic Tracked Variables
Tracked Variables hold a number. As you change the value, the Tracked Variable tracks the min, max, average, etc. You can optionally track the history of the value over time (the history) or the amount of time spent at specific values (the profile). You can also listen for when a Tracked Variable changes. A special kind of Tracked Variable is a Kinetic Tracked Variable (KTV). A KTV lets you set a rate. The rate is the change in value per unit of model time. When you set a rate, the KTV records when you set the rate and the initial value. In this way, you can ask a KTV for its value at any point and get the exact continuous value. KTVs are the heart of this model. You can use a KTV as a label value. Each tank has a label called "Level" that is a KTV holding the level of the tank. KTVs are also used to represent the progress of a transfer of fluid between tanks.
Custom Draw
The fluid tanks you see in the model are BasicFR objects. The shape of the object is set to a cylinder. The color of the object determines the color of the fluid. To draw the changing fluid level, the OnDraw trigger of each tank uses the Level label to determine the height of the fluid. Then the OnDraw trigger draws a cylinder covering the remainder of the tank. Because the draw code accesses the Level label's value, the cylinder will change as the value of the Level label changes.
Tank: An Object Process Flow
Most of the logic in this model is defined in an Object Process Flow called Tank. I used an Object Process Flow so that it would be easy to attach other objects to the flow to imbue them with fluid tank logic. In a way, it's like defining a programming class. When you attach an object, you create an instance of that class. The Tank flow defines behavior for a general fluid tank:
A Tank can have fluid transferred in, out, or both
A transfer indicates a source tank, a destination tank, an amount, and a rate. If the source tank is null, then the fluid is generated in the tank. If the destination tank is null, fluid is consumed in the tank.
A tank can have as many active transfers as the user adds to it. There is no accompanying concept of "pipes" in this setup.
Each transfer changes the rate of the Level KTV for each tank.
If the tank gets full, input transfers are paused until the tank empties below a threshold (95% of its capacity).
If the tank gets empty, output transfers are paused until the tank fills above a threshold (5% of its capacity).
If the tank is stopped, both input and output transfers are paused until the tank is resumed.
The Model Logic
The model logic is contained in the process flow called Process Flow. It picks a random recipe from the Recipes table and uses that to create transfers into the Mixer tank. Once those transfers are complete, the Mixer tank empties itself.
When it's completely empty, the tank produces an item. Then that process repeats. By using an Object Process Flow, the logic for "how tanks work in general" is separated from "what the tanks are doing."
Pros and Cons
The main con is that you would need to implement this logic yourself rather than starting with an object. This includes finding and fixing bugs. I have found and fixed many bugs in this demo model, but I'm fairly confident there are more. It turns out creating an object is tricky. However, there are a few pros:
Compared to the fluid library, there is no ticker. The fluid library relies on a ticker to handle changes in level. This approach uses KTVs instead. KTVs are newer than the fluid library. A ticker adds a frequent event to the model and loops through fluid objects checking for changes. It is possible that the approach in this article is more accurate and faster to run. It's also possible that it's slower, due to the number of process flow activities.
Compared to FloWorks, there is no monetary cost. This may actually be a con, as time spent developing logic is an expense. This approach will take 10x longer or worse to get right. Your future self will thank you for just buying it.
You have full control over the behavior. If you don't like how something works, or you want to add additional logic, the logic is all available for you to edit. Again, this might be a con, because you have to fix bugs as you make them.
Conclusion
Overall, this demo model shows lots of FlexSim features working together. That is valuable in itself. As a replacement for fluid objects, this demo model isn't a great route unless you have very specific needs. As I built this model, I realized that I was probably solving the same set of issues that the developers of the fluid library were solving. What I thought was going to be somewhat simple turned out more complicated. I still think this is doable, but I'd look at other options very carefully first.
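To make the kinetic tracked variable idea concrete with a small worked example (the numbers are illustrative, not taken from the attached model): if a tank's Level KTV is 20 at time 100 and you set its rate to 0.5 units per model time unit, then asking for the value at time 130 returns 20 + 0.5 × 30 = 35. No intermediate events are needed; the KTV simply records the value and the time at which the rate was set and interpolates on demand.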
View full article
In the attached model we use a Timetable and two MTBF/MTTR objects to define Schedule Loss, Availability Loss (breakdowns), and an element of Performance Loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the send-to percentage, which accounts for QualityLoss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with a minimum of 5 seconds - so it also contributes to performance loss. When the Type of the item changes, a setup time occurs, which is the final contributor to performance loss. Two state profiles were added to the processor - one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) and regular FlexSim state changes and determines the availability state that should prevail. A user command getOEEstat is used to access the values, which it calculates on demand and stores in a label on the processor called statsMap. The syntax for this command is: getOEEstat(myMachine,"OEE") The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE". A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats for processor 1. Processor_OEE_2.fsm 2023-08-22 Update: Added 'TEE' stat.
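As a quick usage sketch, the command can be called from a script window or any trigger once the model has run. The object name "Processor1" and the particular stats chosen below are illustrative assumptions; use the names from your own model:
Object machine = Model.find("Processor1"); // hypothetical name of the tracked processor
Array stats = ["AvailabilityRatio", "PerformanceRatio", "QualityRatio", "OEE"];
for (int i = 1; i <= stats.length; i++) {
    print(stats[i], getOEEstat(machine, stats[i])); // getOEEstat is the user command from the attached model
}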
View full article
Playing Multiplayer Tag in FlexSim! TagServer.fsm TagClient.fsm This is a side project to show off some fun things FlexSim can do using sockets. Sockets are just one possible way that one instance of FlexSim can communicate with another. In its simplest sense, a socket is just a port number and an IP address that computers use to send information to each other. For example, whenever a computer visits a website, it uses sockets to create a connection to the webserver's IP address on port 80 (HTTP) or port 443 (HTTPS). All of this occurs within the framework of the TCP/IP network. Similarly, FlexSim can establish socket connections to communicate with another FlexSim instance as long as you know the IP address and choose a port for both instances to connect on.
Establish Socket Connection
In this game of tag, the model that acts as the server uses these commands to set up the socket connection with the client:
socketinit();
servercreatemain(8002);
serveraccept(0);
Note: All of the commands in this article can be found in the documentation.
The client then runs the following to connect to the server. The HostIP should be the IP address of the device running the server model (or 127.0.0.1 if running both client and server on the same machine).
socketinit();
int a = clientcreate();
int a_con = clientconnect(a, HostIP, 8002);
On both the client model and server model, these commands are in the OnRunStart trigger, and the server creates five socket connections for each player: four for inputs (up, down, left, right) and one for passing in the player's name from the client. The server model will freeze and wait for all sockets to be set up. For example, if there are 3 players, the server model will wait until 3 clients have successfully run the client model and connected. After all clients have connected to the server, we can use clientsend() and serverreceive() to send information from client to server. An example of this is sending movement inputs from client to server. This is what that looks like:
Client
// MOVE
if (iskeydown(87)){
    clientsend(2, "up");
}
if (iskeydown(83)){
    clientsend(3, "down");
}
if (iskeydown(65)){
    clientsend(4, "left");
}
if (iskeydown(68)){
    clientsend(5, "right");
}
Server
string up1 = serverreceive(token.Rank * 5 - 3, NULL, 100, 1);
string down1 = serverreceive(token.Rank * 5 - 2, NULL, 100, 1);
string left1 = serverreceive(token.Rank * 5 - 1, NULL, 100, 1);
string right1 = serverreceive(token.Rank * 5, NULL, 100, 1);
// token.Rank is the number of the player (1,2,3 etc.) and is used to reference the right
// socket for each player's inputs
// reset coordinates
te.X = 0;
te.Y = 0;
te.Stop = 0;
if (up1 != ""){ // up W, meaning the server received an input on this socket
    te.Y = 1; // this accounts for getting multiple messages from the client like "upupup"
}
if (down1 != ""){ // down S
    te.Y = -1; // this label is later used in a travel activity
}
if (left1 != ""){ // left A
    te.X = -1;
}
if (right1 != ""){ // right D
    te.X = 1;
}
Note: In a typical game environment, the server is also sending information back to the clients, but in this example, all visuals and logic occur on the server.
Game Logic
All game logic is found in the server process flow.
Setup and Order of Models
To play Tag, follow these steps:
Note: If there are certain firewalls or security groups on your network that don't allow traffic into FlexSim from outside local networks, you may be limited in who can connect to play tag.
1. Open the TagServer model and change the NumPlayers parameter to the desired number of players
2. Open the TagClient model and change both global variables (HostIP and EnterNameHere). Again, if everything is running on the same device, use 127.0.0.1 as the HostIP global variable value.
3. Reset and run the TagServer model first
4. Reset and run the TagClient model. You should get this output if the connection was successful
5. Run clients one at a time for each player until all players have connected; the server will then run automatically. This is the output on the server upon successful connection
6. Enjoy!
View full article
This model demonstrates a method of giving a task executer a set of tasks (or jobs) to accomplish in a specific sequence by referencing information stored in a global table. In this case, an operator needs to move boxes between queues in a very specific order. The global table defines the order of operations, with each row representing one job. The row contains the necessary information about the job: the item to be moved and its destination. These jobs are to be carried out in order, row by row. First, Product1 will be moved to Queue1, then Product2 to Queue2, Product3 to Queue3, Product1 to QueueOut, and so on. Any modifications made to this table will directly modify the model's behavior. The logic behind the process flow is relatively simple: a single token loops through a task sequence, with the operator completing a job with every loop. Assign Labels activities are used to get information about the job from the global table and attach that information to the token. The "Assign Labels: Get Table Data" activities reference the current row in the global table to access and store information about the current job (the item name and its destination). The "Assign Label: Item" activities use custom code to search through a group containing all the queues in the model until they find the item matching the item name from the global table. The task sequence directs the operator to travel to the item's queue, load the item, travel to the destination, and unload the item. The token's labels contain the information that is needed for these activities. After this sequence is completed, the label "row" is incremented so that the next row in the table will be used for the following job. Although this demo model shows an operator moving boxes between queues, this method of acquiring data from global tables to define an order of operations can be used in a variety of other applications. GlobalTableDemo.fsm
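For reference, the custom code in the "Assign Label: Item" activity could look something like the sketch below. The group name ("Queues") and the token label names (ItemName, Item) are illustrative assumptions; the attached model's actual names may differ.
Group queues = Group("Queues"); // group containing all the queues in the model
for (int i = 1; i <= queues.length; i++) {
    Object queue = queues[i];
    for (int j = 1; j <= queue.subnodes.length; j++) {
        Object item = queue.subnodes[j];
        if (item.name == token.ItemName) { // item name read from the global table earlier
            token.Item = item; // referenced later by the travel/load/unload tasks
            return 0;
        }
    }
}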
View full article
This model, developed by @Jacob W2, demonstrates a method of using a kinetic tracked variable to measure process completion on a processor. This allows a processor to have a dynamic processing time, which isn't possible using the standard processor logic. This is all handled through the use of an object process flow that easily allows other processors to be used in a similar fashion. Kinetic tracked variable.fsm This is the basic layout for the model: a source, a basic fixed resource, and a queue. A basic fixed resource is used in lieu of a processor because it has no innate logic that needs to be dealt with. This is the object process flow layout. It’s fairly simple, but handles all of the logic we will need for the processor to work. The “Model Start Logic” container allows the first item to enter the processor. The “Item Control” section is the main feature of the model. A token is created whenever an item enters the processor. A process time label is created on the token, which sets when the process will be completed. This value should be set in model time units. For example, if a part takes one minute to complete and the model time is in units of seconds, the value should be 60. The Initialize Tracked Variable custom code activity is what actually creates the tracked variable that represents the processing progress. First, a treenode has to be created for the label, and then the tracked variable needs to be initialized with the correct type; in this case, a kinetic level tracked variable. A token is then created in the Variable Rate Loop. This token is used to set the processing rate. First, the token will set an initial rate for the tracked variable, and then after a random delay, it will set a new rate for the part processing. It continues to loop until released by its parent token. While the token for tracking the rate is looping, the token in the “Item Control” container is at the Wait for Event activity watching the tracked variable that we created. Once the tracked variable reaches the value of the process time label, the token releases the item from the processor, and then readies the processor for the next item through a custom code activity. The tracking rate token is then released to a delay and sink activity block, and the original token is then sent to a sink activity. When another item arrives, a new token is created in the “Item Control” container, and the process outlined above is repeated. This application of kinetic tracked variables can be extended to a more complex model that has varying process times based on the availability of operators. More specifically, this model allows operators to be added to the process dynamically when they are available, and the number of operators working on the process directly affects the processing time. KTV_Operators.fsm This model layout contains two sources, two basic fixed resources, and two queues in parallel. It also contains five operators, which are grouped according to their shift. A kinetic tracked variable is once again used to represent the processing time or completion rate of an item, and the rate changes dynamically based on the number of operators working on the job at a time. The logic is contained in an object process flow, which is attached to the two basic fixed resources. The first of the three main containers, “Create Tracked Variable and Wait,” creates the tracked variable for the processing time and waits for processing to be complete. The second container acquires the operators from a list based on the current shift.
The last container controls operator movement and changes the processing rate. This logic allows operators to come and go during shifts. The rate can be changed dynamically. As illustrated with the above examples, kinetic tracked variables are a powerful and versatile tool, allowing a greater level of control and customization over your model.
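As a small worked example of how the dynamic rate plays out (the numbers are illustrative): suppose an item's process time label is 60 and the rate loop initially sets the tracked variable's rate to 1.0. If, after 20 model time units, the rate is lowered to 0.5 (say, because an operator left), the remaining 40 units of progress take another 80 time units, so the item finishes at time 100 instead of 60. The Wait for Event activity simply fires whenever the kinetic tracked variable crosses the process time value, whatever path the rate took to get there.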
View full article
ContinuousUtilization_4.fsm In the attached model, you'll find a Utilization vs Time chart driven by a Statistics Collector and a Process Flow. This article is an alternative approach to the chart shown here: https://answers.flexsim.com/content/kbentry/158947/utilization-vs-time-statistics-collector.html Thanks @Felix Möhlmann for putting that article together. The concept in this model is very similar, but this is a different approach. There are pros and cons to each method. This method offers some performance improvements at the cost of writing more FlexScript code. This method also doesn't handle warmup time, where the other method does. The general idea in this model is to keep a history of state changes for the previous hour (or other time interval). The history adds new info as states continue to change and drops old info as it "expires" by being older than the time interval. I chose to use a Bundle (stored on a token label) for the history. Bundles are optimized to add data to the end. If a bundle is paged (they are by default), then they are also optimized to remove data from the beginning. [Begin Technical Discussion - TLDR; Bundles are fast at removing the first row and adding new rows] A paged bundle keeps blocks of memory (pages) for a certain number of rows. If a page fills up, it allocates another page. It keeps track of the location in the page for the next entry. Similarly, it keeps track of the location of the first entry on the start page. If you remove the first row, that start location is moved forward on the page. If the start location gets to the end of a page, that page is dropped and the start location moves to the next page. [End Technical Discussion] The Process Flow maintains the history table. Whenever the object's state changes, the Process Flow adds a new entry to the history table. This part is straightforward. The tricky part is properly "expiring" the data. To do this, a second set of tokens waits for the last of the oldest data to expire. When the oldest set of data should expire, the tokens remove any rows that are too old and then wait for the next oldest row. If there are no expired rows, the tokens wait for one "time interval" and then check again. This is because if there is no expired data, data can't expire for at least one time interval. In this way, the history table is always kept up to date. The Statistics Collector is configured to post-process the history table. It calculates the total utilization, including the state that the object is currently in, as well as the state being "phased out" from the history table. The Statistics Collector can do this calculation at any point in the model. So the sampling interval (called Resolution in the model) is independent of the time window.
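To illustrate the expiry step, a rough sketch of what "remove any rows that are too old" could look like is shown below. It assumes the history bundle is stored on a token label called History with Time and State columns, and that the bundle-backed label can be accessed through the Table API; the label and column names are illustrative, not necessarily those used in the attached model.
Table history = token.History;                   // bundle-backed label viewed as a table
double cutoff = time() - token.TimeInterval;     // TimeInterval is an assumed label
while (history.numRows > 0 && history[1]["Time"] < cutoff) {
    history.deleteRow(1); // paged bundles drop leading rows cheaply
}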
View full article
Changing the packing method for task executers can be tricky, and it can also vary widely between types. In this article we'll explore the default packing method for each TE and how to alter it to suit the model's needs. Attached is a model which demonstrates the default packing methods and the comparison. The model is also the best place to get premade code for the ASRS, the crane, and the Robot: TEStackingNew.fsm
Default Stacking on Task Executers
There are two types of default stacking methods on the TEs. AGVs, operators, forklifts, the ASRS, and BasicTEs all use a simple up-and-down stacking method: The other TEs (elevator, robot, and crane) don't separate the boxes at all, so they all look like they're on top of each other (this is because they're mostly meant to carry only one item at a time):
Changing the Stacking Pattern:
AGV, Operator, & BasicTE: For these objects, changing the stacking method is relatively simple.
Add 5 labels to the object: numx, numy, xshift, yshift, and zshift
Add an OnLoad trigger in the properties panel
Set the trigger by going to Visual->Set Location
Type in this code (the x, y, and z expressions respectively):
current.xshift+item.size.x*((item.rank-1)%current.numx)
-current.yshift+item.size.y*(Math.floor(((item.rank-1)/current.numx)%current.numy) - (current.numy-1)/2)
current.zshift+item.size.z*Math.floor((item.rank-1)/current.numx/current.numy)
The only thing that differs between these 3 objects is the label values. The labels "numx" and "numy" set the number of objects in the x direction and in the y direction. For example, a 3x3 grid on the AGV would set "numx" to 3 and "numy" to 3.
          xshift   yshift   zshift
AGV       1        .5       .5
Operator  .75      .29      1.10
BasicTE   0        0        .85
Elevator, Robot, & Crane: These objects are very similar to the TEs above. You'll follow the same steps as above, except on the OnLoad trigger, choose to just paste custom code in the box. They have slightly different code (due to the different directions/ways of stacking) that you can copy from the model above. However, you will still add the labels to the TEs. You can set them using values like these:
          xshift   yshift   zshift
Elevator  .7       1        .095
Robot     .34      .35      .2
Crane     .2       .5       -.25
Forklift: This is the easiest object to change because the option is already built into the TE. To change the stacking pattern, just add an OnEntry trigger and select Transporter Stacking Method, then change the values to what you would like.
ASRS: For this one, you can change the stacking method by editing the model tree.
The steps to change it are simple but specific:
Open the model tree for the object by right-clicking and selecting Explore Tree
Find the behaviour node beneath the ASRS node
Add a node underneath called "eventfunctions"
Beneath the node you just created, add a node called "OnPreDraw"
Paste the code below into that node (you can edit this to alter the stacking method however you would like):
inheritcode();
TaskExecuter current = c;
Object followingObj = first(current);
double numx = 1;
double numy = 4;
double xshift = .95;
double yshift = 1.45;
double zshift = .1;
while (objectexists(followingObj)) {
    double x = xshift + followingObj.size.x * ((followingObj.rank - 1) % numx);
    double y = -yshift + yloc(node(">visual/drawsurrogate/Lift/Slide", current)) + followingObj.size.y * (Math.floor(((followingObj.rank - 1) / numx) % numy) - (numy - 1) / 2);
    double z = zshift + followingObj.size.z * Math.floor((followingObj.rank - 1) / numx / numy);
    double xFactor = 0.5;
    double yFactor = 0.5;
    double zFactor = 0;
    setloc(followingObj, x, y, z, xFactor, yFactor, zFactor);
    followingObj = next(followingObj);
}
Some things to keep in mind: Each TE is completely customizable by the user, so the offsets I considered to look right may not look right to you. The good thing is that they're very easy to change!
View full article
Introducing FlexSim Galaga! GALAGA.fsm This is an example of how a game could be made in FlexSim. Feel free to download the model and try it out (the FlexSim version is 23.1). Or if you are interested, see below for how the game works.
How it works
FlexSim Galaga uses Process Flow to perform all logic that occurs in the game. It also takes advantage of Global Variables to keep track of game data as the game is played. This allows one token to set a variable, for example WaveCounter, and that same value can be referenced anywhere in the model.
Player Movement and Inputs
I used the function iskeydown() for all player inputs. A token loops through a Custom Code activity that continuously checks which keys are being pressed and makes the corresponding changes in the model. An example of this for player movement looks like this:
if (iskeydown(65) && te.location.x > -90){ // left A
    te.location.x = te.location.x - 10;
}
if (iskeydown(68) && te.location.x < 90){ // right D
    te.location.x = te.location.x + 10;
}
For shooting, similar code is written and assigns a label to the player. This label is used for the token to act differently depending on what the player has bought from the shop. The shop inputs are similar and check if the player has enough money for the selected item in the shop. (See the process flow activities 'Shoot Inputs' and 'Shop Inputs' for how this is done.)
Wave Health and Speed
As the game plays out, waves get more difficult and arrive faster. Each enemy ship has a certain health value and money value that the player earns when destroying it. These values come from the Global Table WaveHealth. After 50 waves, this table is repeated to spawn new waves. This can go on forever in Endless Mode, or this table can be modified during the model run by buying Extreme Mode. The game starts with waves that last 20 seconds. To make the waves arrive faster over time, this code lowers the WaveDuration from 20 to 15 to 10 etc. every 12 waves. 5 seconds is the fastest wave time; it doesn't get any faster.
if (WaveCounter % 12 == 0){
    if (WaveDuration > 5){
        WaveDuration = WaveDuration - 5;
    }
}
Enemy Ships and "Collision" Logic
When enemy ships are created, they are added to a Global Map called ColShipMap. The key is the column number the ship is in (1-20) and the value is an array of all ships in that column. This makes it easy to check if a shot has hit a ship. While shots are moving, the token they are associated with constantly checks the distance between the shot and any ships in that column (thanks to ColShipMap). The column the shot belongs to is calculated based on the x location of where it is created. When the distance between the shot and a ship in that column is close enough, labels are changed on the token so the game knows which ship was hit and whether to decrease its health or destroy the object entirely.
double x = token.shot.getLocation(1, 0, 0).x;
// inverse of token.item.location.x = ((index - 1) * 10) - 95;
int index = (x + 95) / 10 + 2;
token.Index = index;
Array spaceships = ColShipMap[index];
Vec3 shotPos = token.shot.as(Object).getLocation(0.5, 0.5, 0);
for (int i = 1; i <= spaceships.length; i++){
    double spaceshipY = spaceships[i].as(Object).location.y;
    if (Math.fabs(spaceshipY - shotPos.y) < 2) {
        token.Hit = 1;
        token.target = spaceships[i];
        token.SpliceIndex = i;
        break;
    }
}
Display
There are several Billboard objects in the model that toggle depending on the state of the game.
(switch_hideshape() is used to make them appear like they are flashing) Feel free to look in the process flow for when these occur, but the overall purpose is to inform the player of what is happening in the game. Note: If you zoom out and want to re-center the model for an optimal display, open Properties and click on the view called 'Menu.' Then close the Properties window. All other windows, like the Toolbox, are closed to make the game visuals better. Feel free to look into the Process Flow or Model Triggers for anything I didn't mention. Enjoy!
View full article
A good practice to reduce variance when experimenting is to separate streams of things that might vary in the model so that the random sampling is independent. An example might be that you have a number of processors that are members of the same breakdown profile (MTBF/MTTR object) where the individual breakdowns are dependent on the state of the processor. If during one scenario a processor is used more than before, then it may sample the duration and next breakdown earlier, and therefore change the sequence of the other machines' sampling of breakdown times, increasing variance. This is because the default setting for the MTBF time fields uses 'getstream(current)' - which means a single stream for the MTBF object, shared across all members. You could try to change this in the MTBF by using 'getstream(involved)', where 'involved' refers to the breakdown member machine. This causes other problems, since if you're sampling processing times using the machine's stream too, then the number of items processed will again change the breakdown time samples. You may judge this to be acceptable, but in an ideal world you'd still want separate streams, and may want multiple streams for setup, processing, breakdowns, or subsystem failures. One way to accomplish this is by changing the way getstream() works such that it can generate a stream for any value you pass to it. That might be an object, as the current getstream() accepts, or it could be the string name of the object or its path. It could also be an array, which then opens a number of possibilities: In a breakdown you could replace getstream(current) with getStream([current,involved])   //generates a unique stream number for the MTBF/machine pair* In an Object Process Flow you could replace getstream(activity) with: getStream([current,activity]) // generates a unique stream for the instance and activity pair and works for the general process flow too. For a processing time on a processor, instead of getstream(current) you could use getStream([current,"Processing"]) and getStream([current,"Setup"]) to generate two separate sampling streams. The attached library contains an auto-installing user command that overrides getstream() to provide this functionality. The stream values save with the model. getStream-byvariant3.fsl * This implementation does have some limitations, since during an experiment it does not communicate back to the master model when trying to create new streams. For this reason you'll want to try to have all possible streams set up before running an experiment, or avoid the type of actions that dynamically create the requirement for new streams - so that might be keeping all possible fixed resources and task executers, and hiding/removing them from groups rather than destroying them, as the OnSet options of the parameters table do currently. Alternatively, if you consistently name the dynamically created instances, then the MTBF stream expression could be: getStream([current, involved.name]) Update: I've edited this post and library to use getStream (capital 'S') since the override parameter (var thing) doesn't stay in place and eventually causes FlexScript build errors. So with the updated library you'll need to find/replace from the model tree 'getstream' with 'getStream'.
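As a concrete example of how the array-keyed version plugs into a time field, the expression below could be used as an MTBF first/next failure time (the exponential parameters 0 and 1000 are purely illustrative):
exponential(0, 1000, getStream([current, involved]))
Each MTBF/machine pair then draws from its own stream, so adding load to one machine no longer shifts the breakdown samples of the others.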
View full article
FloWorks 23.2.0 is now available (10 August 2023). This version of FloWorks is intended for use with FlexSim 2023 Update 2. If you are using FloWorks with FlexSim 2023 (LTS), please use FloWorks version 23.0.3 (LTS). If you are using FloWorks with FlexSim 2023 Update 1, please use FloWorks version 23.1.1. All versions can be found in the Downloads section of your FlexSim account on the 3rd party modules tab. Please do not hesitate to report any bugs, usability improvements and feature requests to support@talumis.com. About FloWorks FloWorks is a 3rd party module developed and maintained by Talumis BV ( talumis.com). It provides faster and more accurate modelling and calculation of fluid systems than the default FlexSim fluid library. It is especially useful within the oil, gas, and bulk industry both for production and supply chain optimization. This module requires a FloWorks license with active maintenance. For any questions, please email support@talumis.com. Release notes View the full release notes in the online documentation. FloWorks 23.2.0 Updated FloWorks for FlexSim 2023 Update 2. FloWorks 23.1.1 All bug fixes in FloWorks 23.0.3 below. FloWorks 23.1.0 (12 April 2023) Segmented Pipe: removed virtual content and fixed properties Changed default content of flow tanks and mixer from 50,000 to 1,000. All bug fixes in FloWorks 23.0.2 below. FloWorks 23.0.3 Bug fix: Module dependency shows invalid module version number All bug fixes in FloWorks 22.0.7 below. FloWorks 23.0.2 (12 April 2023) Moved flow control to toolbox Support time series, level triggered event and workability in Tools.create and Tools.get Added context menu option to disable time series in toolbox Added dot syntax to create flow controls, get the default flow control, and request recalculation of a flow control's network All bug fixes in FloWorks 22.0.6 below. FloWorks 22.0.7 Bug fix: Multi-product FlowTank blocks simulation when layer is small but nonzero, due to rounding error. FloWorks 22.0.6 (12 April 2023) Bug fix: Loading Arm processing time defaults to 10. Bug fix: fixed length³ to volume conversion for models in non-metric units Bug fix: fixed error message "FlowConveyor content changes are not allowed" Bug fix: flow conveyor density calculation was incorrect with non-metric units Bug fix: icon grid was not filtering correctly when A-connecting Bug fix: flow conveyor properties panel fixed Bug fix: added vertices editor to polygon flow tank quick properties Bug fix: after the first vessel, vessels were positioned incorrectly in the loading point Bug fix: quick properties now shows "Max. rate" for Loading Arm and "Max. inflow" for Flow Sink. Bug fix: removed duplicate Input / Output / Content change events from event selection list Bug fix: corrected input / output in on change event at end of warmup Bug fix: on empty and on full event would sometimes fire twice
View full article
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy-to-set-up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm All necessary setup is done through labels of the collector. The first three are actually identical to labels found on the default Statistics Collector behind a state bar chart.
- Objects should point at a group that contains all objects the collector should track the utilization for.
- StateTable is a reference to the state table that will be used to determine which state counts as 'utilized'.
- StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile).
- MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization.
- NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600 s at 12 points within that interval, i.e. every 300 s. Each measurement still denotes the utilization over the complete "MeasureInterval". The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the previous hour from that point in time.
- StoredTimeMap is used to allow the collector to correctly function past a warmup time, by storing the total utilized value of each object up to that point. This should not be manually changed.
Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply". The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The "NumSubIntervals" label essentially just controls how many entries are kept in that array. To copy the collector into another model you can create a fresh collector in the target model. Then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector. I hope this can help to speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it). Best regards Felix
Edit: Added a user library with the collector as a draggable icon to the attached files.
Edit2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this.
Edit3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
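As a worked example of the collector's difference calculation (the numbers are illustrative): if an object's accumulated utilized time was 5,000 s one MeasureInterval (3,600 s) ago and is 7,700 s at the current measurement, the collector reports (7700 - 5000) / 3600 ≈ 0.75, i.e. roughly 75% utilization over the last hour.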
View full article
FlexSim 2023 Update 1 Beta is now available. (Updated March 14) FlexSim 23.1.0 Release Notes To get the beta, log in to your account at https://account.flexsim.com, then go to the Downloads section, and click on More Versions. It will be at the top of the list. The More Versions button does not appear when logged in as a guest account. The beta is available only to licensed accounts and accounts that have a license shared with them. Learn more about downloading the best version of FlexSim for your license here. If you have bug reports or other feedback on the software, please email dev@flexsim.com or create a new idea in the Bug Report space or Development space.
View full article
FlexSim 2023 Update 2 is now available for download. You can view the Release Notes in the online user manual. FlexSim 23.2.0 Release Notes For more in-depth discussion of the new features, check out the official software release page: FlexSim 2023 Update 2: NVIDIA Omniverse, USD, Restricted Models, and more If you have bug reports or other feedback on the software, please email dev@flexsim.com or create a new idea in the Bug Report space or Development space.
View full article
Attached is an example simulation of a rail hump yard. Trains in this hump yard are processed in three stages: Arrival - A train engine delivers an arriving train into the arrival area of the yard and then leaves Classification - The shunt engine takes trains from the arrival area to the hump. From there the train is uncoupled into sets of cars for classification, and each set of cars 'falls' to its designated departure train and couples to it. Departure - Once a train has been composed, it is transferred to the departure area, where it waits a random time until departure. I've tried to keep the logic as simple as possible so you can understand the process flow. I've implemented no traffic control between train engines/shunt engine, so they will occasionally run over each other. However, I have used AGV routing constraints to dynamically block off sections of track that are filled by trains, so the engines will move around them. HumpYardSample.fsm
View full article
Pools of features, organized by version
A license is actually a set of features. Different license types are made up of different sets of features. This table shows the various features that make up different FlexSim license types (Enterprise, Educational, Runtime, etc): Each license is set to a given version, and each of the contained features is at that version. With that background, once a license is activated on a license server, its features are added to a pool of license features at a given version. So for instance, let's say you have the following 2 licenses activated to your license server:
Enterprise 23.0 - 2 seats
Runtime 22.2 - 1 seat
Once these licenses are activated to your license server, the server actually has no idea it has 2 Enterprise and 1 Runtime. It sees only the following pools of license features:
dragdropconnect 2 seats [23.0: 2 seats] (the Runtime license didn't provide one of these)
compile 3 seats [23.0: 2 seats, 22.2: 1 seat]
xmlsaveload 2 seats [23.0: 2 seats] (the Runtime license didn't provide one of these)
...
commercialuse 3 seats [23.0: 2 seats, 22.2: 1 seat]
createobjects 2 seats [23.0: 2 seats] (the Runtime license didn't provide one of these)
modeltree 1 seat [22.2: 1 seat] (not a feature of Enterprise licenses)
FlexSim software is feature-greedy
By default, when a FlexSim install contacts a license server for a license, it will try to get 1 of every feature at the software's version or later. In this way we say that FlexSim software is "greedy". For instance, under the above scenario, when you start FlexSim 22.2, the software will by default pull the following features:
dragdropconnect (Enterprise-specific)
compile
xmlsaveload (Enterprise-specific)
stochastics
consolescript
nomodellimit
entiretree
commercialuse
createobjects (Enterprise-specific)
modeltree (Runtime-specific)
The software just tries to get one of each feature, so now its feature set is a hybrid of Enterprise+Runtime. In the software it reports a "Custom" license: If a 2nd person were to open the 22.2 software, they would get that same set, minus the one modeltree feature from Runtime, essentially giving them the 2nd Enterprise seat. If a 3rd person tried to open the software in version 22.2, they would get a slightly hobbled version of a Runtime license, without the modeltree feature. Any in-software features that relied on that being present would be blocked.
Only checkout features for license type
The software includes an option to limit what features it will ask for: In this way you can ensure that the right people are getting the right feature set.
View full article
FlexSim 2019 Update 2 is available. If you have bug reports or other feedback on the software, please email dev@flexsim.com or create a new idea in the Development space. Release Notes Added a Storage System object to improve warehouse modeling. Improved the Rack object. Added more types of racks to the library. Updated the Sketchup SDK to load newer skp file versions. Added a ray tracing render mode. Added a Color Palette tool. Improved the color picker popup. Updated dashboard charts to use color palettes. Added options for showing column names to charts. Added options for showing multiple Y-axes on timeplots and histograms. Updated box plot options for categorizing and coloring data points. Added an option for reordering axis categories on timeplots, histograms, and box plots. Added additional line styles and visualization options to timeplots. Added Table methods for using indexed columns. Added a resetvalues attribute. Added Object getVariable() and setVariable() methods. Updated the SQL IS keyword to work with expressions. Improved performance of SQL queries on tables with indexed columns. Added an array parameter to Math.max() and Math.min(). Added a binary bundle field type. Added a button to Model Settings to export embedded media. Changed the default Person flowitem into separate Man and Woman flowitems. Added options for using the object's color to Person Visuals. Added a changepersonvisuals() command and pick options. Updated the Model Background properties window. Added a right-click option to show/hide model backgrounds from the Toolbox. Added pick options for using lists to the Pick Operator field on the processor. Updated various pick options to use generic label references. Added options to the List for special handling of SELECT values. Added Text options to Quick Properties. Added mean and standard deviation to the statistical distribution popup. Added icons to various pick option menus. Improved the Copy Variable function of Edit Selected Objects. Added the location coordinate system button to the General tab of object properties. Removed the Build menu; its options can still be added to the user toolbar. Changed the run speed slider to behave as a ratio of real time instead of model units per second. Changed the color of the selection box in the 3D view to be more visible. Fixed a bug with deleting array data from a node. Fixed an exception with cloning a table without rows. Fixed a bug with applying experimenter triggers. Backwards Compatibility Note: the following changes may slightly change the way updated models behave. Improved the rendering performance of some imported shapes by optimizing their meshes by material. Some mesh customizations may apply differently for some shapes. Charts now ignore null values instead of trying to categorize or plot them. Changed the FlexScript + operator to concatenate numbers with strings. Fixed some FlexScript math operations with numbers and null variants. Fixed an issue with FlexScript downcasting not always type checking properly in certain expressions. For example, "Object obj = param(1);" will now correctly throw an exception if param(1) is a treenode without Object data. Updated the Rack's place offset to be the actual location that the item will be placed. Fixed a bug with labels added during events at time 0 not being deleted on reset. Fixed a bug in Color.random(). Process Flow Changed the scroll wheel to zoom instead of panning the view up and down. Removed the extra outline around stacked blocks. 
Changed the default option for Release Batch on max wait timer and max idle timer to have Failed unchecked. Organized the ProcessFlow list on the toolbar based on the Toolbox folder structure. Changed Event-Triggered Source and Wait for Event panels to be collapsible. Added an edit field for the number of arrivals on the Scheduled Source. Added a getlastacquiredresource() command. Changed the color of the selection box to match the 3D view's. Backwards Compatibility Note: the following changes may slightly change the way updated models behave. Updated Move Object's Preserve Global Position checkbox to preserve location and rotation. People Added an experiment variable for changing the number of objects in a People group. Added a visual indicator to show when objects are acquired by the shift schedule. Added A* dividers to waiting lines and other objects. Added a custom window for People Down Behaviors. Added a Keep Person on Transport checkbox to the Transport Person activity. Changed the default Arrivals activities to randomize the created person's visuals. Added state history tables. Adjusted the Staff's walk back to reset position so that it is preempted if acquired by something else. Updated activity subflows so travelers synchronize their speeds when traveling together. Moved copying token labels to the created person to after the people settings labels are added so values aren't overwritten. Fixed a bug with getting the value of certain types of labels defined in People Settings. Walls are now created at the current Grid Z height. Conveyor Improved naming of various objects. Fixed a bug with the height of decision points when created with certain model units. Added a chain texture to conveyor belt visual options. AGV Updated curved path quick properties to be able to specify start and end points. Improved naming of control points and areas. A* Added the A* Navigator to the Toolbox. Updated the A* Navigator Properties window.
View full article
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2, and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories, namely initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based order generation as well as history-based order generation.
Initial Inventory
The model includes a Global Table of Initial Inventory. The process flow's initial inventory section reads this table, and then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot().
Inbound
I use the process flow to assign a slot to each incoming item. I use an Assign Labels activity called Find Slot to do this. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity properly finds a slot to store the item, then I go ahead and assign the item to that slot, and have an operator place it in the rack.
Outbound
I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method. In the random method, orders are generated randomly every ~30 seconds. Each order includes a number of SKU line items (again, random) and each line item includes a quantity of that SKU (again, random). Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, it reserves the item as "outbound" by setting the Storage.Item.assignedSlot property to null (Set to Outbound activity). This ensures that no other process will find that same item for picking. The history-based order generation process uses much of the same functionality as the random-based one, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format for what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order, then spawning that order.
Custom Rack Visualization
I have also customized the visualization of the racks. I have added a text to the front of each rack slot (and to the floor at the bottom) that shows the address of that slot. Further, I've given the text a background that is color-coded to the SKU that that slot is designated to store. This was all done through the Storage System's Visualizations tab, where I customized the Rack visualization.
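As a rough illustration of the "Set to Outbound" step described above, the custom code could look something like the sketch below. The token label name (PickItem) is an illustrative assumption, and the demo model's own activity remains the authoritative reference for the exact calls it uses:
Storage.Item storageItem = Storage.Item(token.PickItem); // wrap the flow item found by Find Item by SKU (assumed label name)
storageItem.assignedSlot = NULL; // un-assign the slot so no other pick will reserve this item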
View full article
FlexSim 2017 Update 1 or later is compatible with Windows Mixed Reality headsets and controllers, such as the Samsung HMD Odyssey. To configure such devices, you use the Windows Mixed Reality Portal app in Windows 10 and SteamVR. FlexSim uses SteamVR's OpenVR API to communicate with these devices. Microsoft provides instructions for how to Play SteamVR games in Windows Mixed Reality. I will provide additional summarized steps below: 1. Configure your hardware with the Windows Mixed Reality Portal app in Windows 10. 2. Install Steam. 3. Within Steam, install SteamVR and Windows Mixed Reality for SteamVR. 4. If you are able to do the SteamVR tutorial with your headset, then you should now be able to use VR Mode in FlexSim: Note: While using a Windows Mixed Reality HMD (head-mounted display), pressing the Windows button on the controllers may take you to the Windows Mixed Reality Portal Home: If you end up here, you can take the headset off and put it back on to transfer back to FlexSim: Also, if you appear to be floating far above the ground, you can recenter the headset pose with this script, which you could map to a Custom Button on your toolbar: applicationcommand("recenteroculusrift");
View full article