FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
In the attached model we use a Time Table and two MTBF/MTTR objects to define Schedule Loss, Availability Loss (breakdowns), and an element of Performance Loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the Send To percentage, which accounts for Quality Loss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with a minimum of 5 seconds, so it also contributes to Performance Loss. When the Type of the item changes, a setup time occurs, which is the final contributor to Performance Loss.

Two state profiles were added to the processor - one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) and regular FlexSim state changes, and determines the availability state that should prevail.

A user command, getOEEstat, is used to access the values; it calculates them on demand and stores them in a label on the processor called statsMap. The syntax for this command is: getOEEstat(myMachine, "OEE"). The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE".

A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats for Processor 1.

Processor_OEE_2.fsm

2023-08-22 Update: Added 'TEE' stat.
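As a quick illustration, a script like the following could print the key stats for every tracked machine to the Output Console. This is only a sketch; the group name "OEEObjects" is an assumption - use whatever name the tracking group has in your model:

// Sketch: report OEE stats for every member of the (assumed) "OEEObjects" group
Group machines = Group("OEEObjects");
for (int i = 1; i <= machines.length; i++) {
    Object machine = machines[i].as(Object);
    print(machine.name,
        " OEE: ", getOEEstat(machine, "OEE"),
        " Availability: ", getOEEstat(machine, "AvailabilityRatio"),
        " Performance: ", getOEEstat(machine, "PerformanceRatio"),
        " Quality: ", getOEEstat(machine, "QualityRatio"));
}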
View full article
Playing Multiplayer Tag in FlexSim!

TagServer.fsm
TagClient.fsm

This is a side project to show off some fun things FlexSim can do using sockets. Sockets are just one possible way that one instance of FlexSim can communicate with another. In its simplest sense, a socket is just a port number and an IP address that computers use to send information to each other. For example, whenever a computer visits a website, it uses sockets to create a connection to the web server's IP address on port 80 (HTTP) or port 443 (HTTPS). All of this occurs within the framework of the TCP/IP network. Similarly, FlexSim can establish socket connections to communicate with another FlexSim instance as long as you know the IP address and choose a port for both instances to connect on.

Establish Socket Connection

In this game of tag, the model that acts as the server uses these commands to set up the socket connection with the clients:

socketinit();
servercreatemain(8002);
serveraccept(0);

Note: All of the commands in this article can be found in the documentation.

The client then runs the following to connect to the server. HostIP should be the IP address of the device running the server model (or 127.0.0.1 if running both client and server on the same machine).

socketinit();
int a = clientcreate();
int a_con = clientconnect(a, HostIP, 8002);

On both the client model and the server model, these commands are in the OnRunStart trigger, and the server creates 5 socket connections for each player: 4 for inputs (up, down, left, right) and 1 for passing in the player's name from the client. The server model will freeze and wait for all sockets to be set up. For example, if there are 3 players, the server model will wait until 3 clients have successfully run the client model and connected.

After all clients have connected to the server, we can use clientsend() and serverreceive() to send information from client to server. An example of this is sending movement inputs from client to server. This is what that looks like:

Client

// MOVE
if (iskeydown(87)) { clientsend(2, "up"); }
if (iskeydown(83)) { clientsend(3, "down"); }
if (iskeydown(65)) { clientsend(4, "left"); }
if (iskeydown(68)) { clientsend(5, "right"); }

Server

string up1 = serverreceive(token.Rank * 5 - 3, NULL, 100, 1);
string down1 = serverreceive(token.Rank * 5 - 2, NULL, 100, 1);
string left1 = serverreceive(token.Rank * 5 - 1, NULL, 100, 1);
string right1 = serverreceive(token.Rank * 5, NULL, 100, 1);
// token.Rank is the number of the player (1, 2, 3, etc.) and is used to reference the right
// socket for each player's inputs

// reset coordinates
te.X = 0;
te.Y = 0;
te.Stop = 0;
if (up1 != "") {    // up W - a non-empty string means the server received an input on this socket
    te.Y = 1;       // this also accounts for getting multiple messages from the client like "upupup"
}
if (down1 != "") {  // down S
    te.Y = -1;      // this label is later used in a travel activity
}
if (left1 != "") {  // left A
    te.X = -1;
}
if (right1 != "") { // right D
    te.X = 1;
}

Note: In a typical game environment, the server also sends information back to the clients, but in this example, all visuals and logic occur on the server.

Game Logic

All game logic is found in the server's process flow.

Setup and Order of Models

To play Tag, follow these steps. Note: if there are firewalls or security groups on your network that don't allow traffic into FlexSim from outside local networks, you may be limited in who can connect to play tag.

1. Open the TagServer model and change the NumPlayers parameter to the desired number of players.
2. Open the TagClient model and change both global variables (HostIP and EnterNameHere). Again, if everything is running on the same device, use 127.0.0.1 as the HostIP global variable value.
3. Reset and run the TagServer model first.
4. Reset and run the TagClient model. You should get this output if the connection was successful.
5. Run clients one at a time for each player until all players have connected; the server will then run automatically. This is the output on the server upon successful connection.
6. Enjoy!
View full article
This model demonstrates a method of giving a task executer a set of tasks (or jobs) to accomplish in a specific sequence by referencing information stored in a global table. In this case, an operator needs to move boxes between queues in a very specific order. The global table defines the order of operations, with each row representing one job. The row contains the necessary information about the job: the item to be moved and its destination. These jobs are to be carried out in order, row by row. First, Product1 will be moved to Queue1, then Product2 to Queue2, Product3 to Queue3, Product1 to QueueOut, and so on. Any modifications made to this table will directly modify the model's behavior. The logic behind the process flow is relatively simple: a single token loops through a task sequence, with the operator completing a job with every loop. Assign Labels activities are used to get information about the job from the global table and attach that information to the token. The "Assign Labels: Get Table Data" activities reference the current row in the global table to access and store information about the current job (the item name and its destination). The "Assign Label: Item" activities use custom code to search through a group containing all the queues in the model until it finds the item matching the item name from the global table. The task sequence directs the operator to travel to the item's queue, load the item, travel to the destination, and unload the item. The token's labels contain the information needed for these activities. After this sequence is completed, the label "row" is incremented so that the next row in the table will be used for the following job. Although this demo model shows an operator moving boxes between queues, this method of acquiring data from global tables to define an order of operations can be used in a variety of other applications. GlobalTableDemo.fsm
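The custom code in the "Assign Label: Item" activities described above might look roughly like the sketch below. This is not the model's exact code; the group name ("AllQueues") and the token labels (ItemName, Item) are assumptions for illustration:

// Sketch: search every queue in the (assumed) "AllQueues" group for the first item
// whose name matches the name read from the global table.
Group queues = Group("AllQueues");
for (int q = 1; q <= queues.length; q++) {
    Object queue = queues[q].as(Object);
    for (int i = 1; i <= queue.subnodes.length; i++) {
        Object candidate = queue.subnodes[i].as(Object);
        if (candidate.name == token.ItemName) {
            token.Item = candidate;   // the task sequence then loads/unloads this item
            return 0;
        }
    }
}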
View full article
This model, developed by @Jacob W2, demonstrates a method of using a kinetic tracked variable to measure process completion on a processor. This allows a processor to have a dynamic processing time, which isn't possible using the standard processor logic. This is all handled through the use of an object process flow that easily allows other processors to be used in a similar fashion. Kinetic tracked variable.fsm

This is the basic layout for the model: a source, a basic fixed resource, and a queue. A basic fixed resource is used in lieu of a processor because it has no innate logic that needs to be dealt with. This is the object process flow layout. It's fairly simple, but handles all of the logic we will need for the processor to work. The "Model Start Logic" container allows the first item to enter the processor. The "Item Control" section is the main feature of the model. A token is created whenever an item enters the processor. A process time label is created on the token, which sets when the process will be completed. This value should be set in model time units. For example, if a part takes one minute to complete and the model time is in units of seconds, the value should be 60. The Initialize Tracked Variable custom code activity is what actually creates the tracked variable that represents the processing progress. First, a treenode has to be created for the label, and then the tracked variable needs to be initialized with the correct type; in this case, a kinetic level tracked variable.

A token is then created in the Variable Rate Loop. This token is used to set the processing rate. First, the token will set an initial rate for the tracked variable, and then after a random delay, it will set a new rate for the part processing. It continues to loop until released by its parent token. While the token for tracking the rate is looping, the token in the "Item Control" container is at the Wait for Event activity watching the tracked variable that we created. Once the tracked variable reaches the value of the process time label, the token releases the item from the processor, and then readies the processor for the next item through a custom code activity. The tracking rate token is then released to a delay and sink activity block, and the original token is then sent to a sink activity. When another item arrives, a new token is created in the "Item Control" container, and the process outlined above is repeated.

This application of kinetic tracked variables can be extended to a more complex model that has varying process times based on the availability of operators. More specifically, this model allows operators to be added to the process dynamically when they are available, and the number of operators working on the process directly affects the processing time. KTV_Operators.fsm

This model layout contains two sources, two basic fixed resources, and two queues in parallel. It also contains five operators, which are grouped according to their shift. A kinetic tracked variable is once again used to represent the processing time or completion rate of an item, and the rate changes dynamically based on the number of operators working on the job at a time. The logic is contained in an object process flow, which is attached to the two basic fixed resources. The first of the three main containers, "Create Tracked Variable and Wait," creates the tracked variable for the processing time and waits for processing to be complete. The second container acquires the operators from a list based on the current shift.
The last container controls operator movement and changes the processing rate. This logic allows operators to come and go during shifts. The rate can be changed dynamically. As illustrated with the above examples, kinetic tracked variables are a powerful and versatile tool, allowing a greater level of control and customization over your model.
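As a rough illustration of the kinetic idea used above (plain arithmetic only, not the model's actual code): with a kinetic level, FlexSim can compute the completion time directly from the current rate instead of polling the value, so the Wait for Event fires exactly when the level crosses the process time.

// Illustrative arithmetic only - the numbers are made up for the example
double processTime = 60;   // "work" required, in model time units
double rate1 = 0.5;        // initial rate (e.g. one operator)
double rate2 = 2.0;        // new rate after more operators join
double switchTime = 20;    // model time at which the rate changes
double progress = rate1 * switchTime;                 // level reached when the rate changes (= 10)
double remaining = (processTime - progress) / rate2;  // time still needed at the new rate (= 25)
double finishTime = switchTime + remaining;           // the Wait for Event fires at time 45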
View full article
Changing the packing method for task executers can be tricky, and the method varies widely between types. In this article we'll explore the default packing method for each task executer (TE) and how to alter it to suit the model's needs. Attached is a model which demonstrates the default packing methods and the comparison. The model is also the best place to get premade code for the ASRS, the crane, and the robot: TEStackingNew.fsm

Default Stacking on Task Executers

There are two types of default stacking methods on the TEs. AGVs, operators, forklifts, the ASRS, and BasicTEs all use a simple up-and-down stacking method. The other TEs (elevator, robot, and crane) don't separate the boxes at all, so they all look like they're on top of each other (this is because they're mostly meant to carry only one item at a time).

Changing the Stacking Pattern

AGV, Operator & BasicTE:

For these objects, changing the stacking method is relatively simple:

1. Add 5 labels to the object: numx, numy, xshift, yshift, and zshift.
2. Add an OnLoad trigger in the properties panel.
3. Set the trigger by going to Visual -> Set Location.
4. Type in this code:

current.xshift + item.size.x * ((item.rank - 1) % current.numx)
-current.yshift + item.size.y * (Math.floor(((item.rank - 1) / current.numx) % current.numy) - (current.numy - 1) / 2)
current.zshift + item.size.z * Math.floor((item.rank - 1) / current.numx / current.numy)

The only thing that differs between these 3 objects is the label values. The labels "numx" and "numy" set the number of objects in the x direction and the number in the y direction. For example, a 3x3 grid on the AGV would set "numx" to 3 and "numy" to 3.

          xshift   yshift   zshift
AGV       1        .5       .5
Operator  .75      .29      1.10
BasicTE   0        0        .85

Elevator, Robot & Crane:

These objects are very similar to the TEs above. You'll follow the same steps, except in the OnLoad trigger choose to paste custom code in the box. They have slightly different code (due to the different directions/ways of stacking) that you can copy from the model above. However, you will still add the labels to the TEs. You can set them using values like these:

          xshift   yshift   zshift
Elevator  .7       1        .095
Robot     .34      .35      .2
Crane     .2       .5       -.25

Forklift:

This is the easiest object to change because the option is already built into the TE. To change the stacking pattern, just add an OnEntry trigger, select Transporter Stacking Method, and then change the values to what you would like.

ASRS:

For this one, you can change the stacking method by editing the model tree. The steps to change it are simple but specific:

1. Open the model tree for the object by right-clicking it and selecting Explore Tree.
2. Find the behaviour node beneath the ASRS node.
3. Add a node underneath it called "eventfunctions".
4. Beneath the node you just created, add a node called "OnPreDraw".
5. Paste the code below into that node (you can edit this to alter the stacking method however you would like):

inheritcode();
TaskExecuter current = c;
Object followingObj = first(current);
double numx = 1;
double numy = 4;
double xshift = .95;
double yshift = 1.45;
double zshift = .1;
while (objectexists(followingObj)) {
    double x = xshift + followingObj.size.x * ((followingObj.rank - 1) % numx);
    double y = -yshift + yloc(node(">visual/drawsurrogate/Lift/Slide", current)) + followingObj.size.y * (Math.floor(((followingObj.rank - 1) / numx) % numy) - (numy - 1) / 2);
    double z = zshift + followingObj.size.z * Math.floor((followingObj.rank - 1) / numx / numy);
    double xFactor = 0.5;
    double yFactor = 0.5;
    double zFactor = 0;
    setloc(followingObj, x, y, z, xFactor, yFactor, zFactor);
    followingObj = next(followingObj);
}

Some things to keep in mind: each TE is completely customizable by the user, so the offsets I considered to look right may not look right to you. The good thing is they're very easy to change!
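As a quick sanity check of the Set Location formula above, here is the position it produces for the 5th item on an AGV with a 3x3 grid (numx = 3, numy = 3), 1 m cubic items, and the AGV label values from the first table (illustration only):

// Worked example of the Set Location formula for item rank 5:
//   x = xshift  + 1 * ((5 - 1) % 3)                                  = 1   + 1 = 2
//   y = -yshift + 1 * (Math.floor(((5 - 1) / 3) % 3) - (3 - 1) / 2)  = -.5 + 0 = -.5
//   z = zshift  + 1 * Math.floor((5 - 1) / 3 / 3)                    = .5  + 0 = .5
// So the 5th item sits in the second x position, the middle y row, on the bottom layer.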
View full article
Introducing FlexSim Galaga! GALAGA.fsm

This is an example of how a game could be made in FlexSim. Feel free to download the model and try it out (FlexSim version is 23.1). Or, if you are interested, see below for how the game works.

How it works

FlexSim Galaga uses Process Flow to perform all logic that occurs in the game. It also takes advantage of Global Variables to keep track of game data as the game is played. This allows one token to set a variable, for example WaveCounter, and that same value can be referenced anywhere in the model.

Player Movement and Inputs

I used the function iskeydown() for all player inputs. A token loops through a Custom Code activity that continuously checks which keys are being pressed and makes the corresponding changes in the model. An example of this for player movement looks like this:

if (iskeydown(65) && te.location.x > -90) { // left A
    te.location.x = te.location.x - 10;
}
if (iskeydown(68) && te.location.x < 90) { // right D
    te.location.x = te.location.x + 10;
}

For shooting, similar code is written and assigns a label to the player. This label is used for the token to act differently depending on what the player has bought from the shop. The shop inputs are similar and check whether the player has enough money for the selected item in the shop. (See the process flow activities 'Shoot Inputs' and 'Shop Inputs' for how this is done.)

Wave Health and Speed

As the game plays out, waves get more difficult and arrive faster. Each enemy ship has a certain health value and a money value that the player earns when destroying it. These values come from the Global Table WaveHealth. After 50 waves, this table is repeated to spawn new waves. This can go on forever in Endless Mode, or the table can be modified during the model run by buying Extreme Mode.

The game starts with waves that last 20 seconds. To make the waves arrive faster over time, this code lowers the WaveDuration from 20 to 15 to 10, etc. every 12 waves. 5 seconds is the fastest wave time and it doesn't get any faster:

if (WaveCounter % 12 == 0) {
    if (WaveDuration > 5) {
        WaveDuration = WaveDuration - 5;
    }
}

Enemy Ships and "Collision" Logic

When enemy ships are created, they are added to a Global Map called ColShipMap. The key is the column number the ship is in (1-20) and the value is an array of all ships in that column. This makes it easy to check whether a shot has hit a ship. While a shot is moving, the token it is associated with constantly checks the distance between the shot and any ships in that column (thanks to ColShipMap). The column the shot belongs to is calculated from the x location where it was created. When the distance between the shot and a ship in that column is close enough, labels are changed on the token so the game knows which ship was hit and whether to decrease its health or destroy the object entirely.

double x = token.shot.getLocation(1, 0, 0).x;
// inverse of token.item.location.x = ((index - 1) * 10) - 95;
int index = (x + 95) / 10 + 2;
token.Index = index;
Array spaceships = ColShipMap[index];
Vec3 shotPos = token.shot.as(Object).getLocation(0.5, 0.5, 0);
for (int i = 1; i <= spaceships.length; i++) {
    double spaceshipY = spaceships[i].location.y;
    if (Math.fabs(spaceshipY - shotPos.y) < 2) {
        token.Hit = 1;
        token.target = spaceships[i];
        token.SpliceIndex = i;
        break;
    }
}

Display

There are several Billboard objects in the model that toggle depending on the state of the game (switch_hideshape() is used to make them appear like they are flashing). Feel free to look in the process flow for when these occur, but the overall purpose is to inform the player of what is happening in the game.

Note: If you zoom out and want to re-center the model for an optimal display, open Properties and click on the view called 'Menu', then close the Properties window. All other windows like the Toolbox are closed to make the game visuals better.

Feel free to look into the Process Flow or Model Triggers for anything I didn't mention. Enjoy!
View full article
Good practice to reduce variance when experimenting is to separate the streams of things that might vary in the model so that the random sampling is independent. An example might be that you have a number of processors that are members of the same breakdown profile (MTBF/MTTR object) where the individual breakdowns are dependent on the state of the processor. If during one scenario a processor is used more than before, then it may sample the duration and next breakdown earlier, and therefore change the sequence of the other machines' breakdown time samples, increasing variance. This is because the default setting for the MTBF time fields uses 'getstream(current)', which means a single stream for the MTBF object, shared across all members.

You could try to change this in the MTBF by using 'getstream(involved)', where 'involved' refers to the breakdown member machine. This causes other problems, since if you're sampling processing times using the machine's stream too, then the number of items processed will again change the breakdown time samples. You may judge this to be acceptable, but in an ideal world you'd still want separate streams, and may want multiple streams for setup, processing, breakdowns, or subsystem failures.

One way to accomplish this is by changing the way getstream() works such that it can generate a stream for any value you pass to it. That might be an object, as the current getstream() accepts, or it could be the string name of the object or its path. It could also be an array, which then opens a number of possibilities:

In a breakdown you could replace getstream(current) with:
getStream([current, involved])   // generates a unique stream number for the MTBF/machine pair*

In an Object Process Flow you could replace getstream(activity) with:
getStream([current, activity])   // generates a unique stream for the instance and activity pair, and works for the general process flow too

For a processing time on a processor, instead of getstream(current) you could use getStream([current, "Processing"]) and getStream([current, "Setup"]) to generate two separate sampling streams.

The attached library contains an auto-installing user command that overrides getstream() to provide this functionality. The stream values save with the model. getStream-byvariant3.fsl

* This implementation does have some limitations, since during an experiment it does not communicate back to the master model when trying to create new streams. For this reason you'll want to try to have all possible streams set up before running an experiment, or avoid the type of actions that dynamically create the requirement for new streams - so that might mean keeping all possible fixed resources and task executers and hiding/removing them from groups, rather than destroying them as the OnSet options of the parameters table currently do. Alternatively, if you consistently name the dynamically created instances, then the MTBF stream expression could be: getStream([current, involved.name])

Update: I've edited this post and library to use getStream (capital 'S'), since the override parameter (var thing) doesn't stay in place and eventually causes FlexScript build errors. So with the updated library you'll need to find/replace 'getstream' with 'getStream' from the model tree.
View full article
FloWorks 23.2.0 is now available (10 August 2023). This version of FloWorks is intended for use with FlexSim 2023 Update 2. If you are using FloWorks with FlexSim 2023 (LTS), please use FloWorks version 23.0.3 (LTS). If you are using FloWorks with FlexSim 2023 Update 1, please use FloWorks version 23.1.1. All versions can be found in the Downloads section of your FlexSim account on the 3rd party modules tab. Please do not hesitate to report any bugs, usability improvements, and feature requests to support@talumis.com.

About FloWorks

FloWorks is a 3rd party module developed and maintained by Talumis BV (talumis.com). It provides faster and more accurate modelling and calculation of fluid systems than the default FlexSim fluid library. It is especially useful within the oil, gas, and bulk industry, both for production and supply chain optimization. This module requires a FloWorks license with active maintenance. For any questions, please email support@talumis.com.

Release notes

View the full release notes in the online documentation.

FloWorks 23.2.0
Updated FloWorks for FlexSim 2023 Update 2.

FloWorks 23.1.1
All bug fixes in FloWorks 23.0.3 below.

FloWorks 23.1.0 (12 April 2023)
Segmented Pipe: removed virtual content and fixed properties.
Changed default content of flow tanks and mixer from 50,000 to 1,000.
All bug fixes in FloWorks 23.0.2 below.

FloWorks 23.0.3
Bug fix: Module dependency shows invalid module version number.
All bug fixes in FloWorks 22.0.7 below.

FloWorks 23.0.2 (12 April 2023)
Moved flow control to toolbox.
Support time series, level triggered event and workability in Tools.create and Tools.get.
Added context menu option to disable time series in toolbox.
Added dot syntax to create flow controls, get the default flow control, and request recalculation of a flow control's network.
All bug fixes in FloWorks 22.0.6 below.

FloWorks 22.0.7
Bug fix: Multi-product FlowTank blocks simulation when layer is small but nonzero, due to rounding error.

FloWorks 22.0.6 (12 April 2023)
Bug fix: Loading Arm processing time defaults to 10.
Bug fix: fixed length³ to volume conversion for models in non-metric units.
Bug fix: fixed error message "FlowConveyor content changes are not allowed".
Bug fix: flow conveyor density calculation was incorrect with non-metric units.
Bug fix: icon grid was not filtering correctly when A-connecting.
Bug fix: flow conveyor properties panel fixed.
Bug fix: added vertices editor to polygon flow tank quick properties.
Bug fix: after the first vessel, vessels were positioned incorrectly at the loading point.
Bug fix: quick properties now shows "Max. rate" for Loading Arm and "Max. inflow" for Flow Sink.
Bug fix: removed duplicate Input / Output / Content change events from event selection list.
Bug fix: corrected input / output in on change event at end of warmup.
Bug fix: on empty and on full events would sometimes fire twice.
View full article
FlexSim 2023 Update 2 is now available for download. You can view the Release Notes in the online user manual: FlexSim 23.2.0 Release Notes

For a more in-depth discussion of the new features, check out the official software release page: FlexSim 2023 Update 2: NVIDIA Omniverse, USD, Restricted Models, and more

If you have bug reports or other feedback on the software, please email dev@flexsim.com or create a new idea in the Bug Report space or Development space.
View full article
In this video you will learn how to use the Text visual object to name parts of a simulation model or display statistics dynamically. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
View full article
Cycle time per process.fsm

When we create simulation models, we sometimes limit ourselves to modeling one process or one part number - the one where we have problems. But what happens when we run a complete production plan with different part numbers, which in turn run at different cycle times? This is a way that allows you to run different part numbers through as many different processes as you want, each with its own cycle time.

What do we need?

1. A label on each item that identifies which part number it is. For example:
Label: IDpart, Value: ABC12

2. A label on each machine that identifies the process it runs. For example:
Label: Station, Value: Processor1

3. A global table whose columns are the different processes in your production line and whose rows are the different part numbers that can run on your line. The values in this table are the cycle times of each part number at each process. For example:

        Processor1   Processor2   Processor3
ABC12   20           10           13
BAC21   11           30           20

4. Configure the processing time of your processor to reference your global table (a sketch of the expression is shown after the benefits list below). With this, we will obtain weighted utilization statistics for the equipment.

Benefits:
1. You can run different part numbers on the same line, each with its respective cycle time.
2. More accurate utilization of equipment and operators.
3. Modifying cycle times in a global table is easier and more controlled than changing them directly in the processes.
4. Capacity calculations for your line will be accurate when running a real production plan.
5. You can add and remove processes, or even modify the flow of your line, without affecting the cycle time that a processor runs, since it is referenced by its label to the global table.
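For step 4, the processing time code might look like the following sketch. The global table name ("CycleTimes") is an assumption here - use the actual name of your table; the IDpart and Station labels are the ones defined in steps 1 and 2:

// Sketch: look up the cycle time for this item's part number at this machine's station.
// "CycleTimes" is a placeholder for your global table name.
return Table("CycleTimes")[item.IDpart][current.Station];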
View full article
In this video you will learn how to use the Send To Port property of 3D objects to define the routing of flowitems. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
View full article
So you were trying to record a video and the Video Recorder crashed. Here is a step-by-step guide on how to debug the video recorder.

Step 1: Is there a stop time in your model?

In some older versions of FlexSim (2022 and older), if there is a stop time in your model that occurs during the recording time, the video recorder will stop indefinitely at that time. Try removing any stop times that occur during the recording.

Step 2: Are your graphics drivers up to date?

This is the most common reason the video recorder is not working. To start debugging this issue, check what graphics card FlexSim is using. If you open a model and go to Help -> About FlexSim from the top toolbar, a screen opens where the highlighted text that starts with OpenGL tells you what graphics card FlexSim is using. You might need to do a little research to find what driver that card uses and then update it, but we've found that in most cases updating the graphics card driver fixes video recorder issues. There's an article here that explains more about how to update your graphics driver (it's a very helpful read that also deals with some special cases on certain graphics cards). Make sure you're downloading either directly from the maker's website or for your specific computer type; your computer will not be able to tell you if the drivers are out of date.

Step 3: Can you record a simpler video?

If you make a very simple model (Source -> Queue -> Processor -> Sink), can you record it without issue? If you can, then at least you know your video recorder can work; it is just having an issue with running a bigger model. If this is the case, you might be running out of VRAM, or the bigger model is taking so long to run that an outside manager is forcing it to close. If that is the case, here are some things to try:

Update your graphics drivers (this is always the biggest issue).
Revert your graphics card settings back to their defaults (for example, in the Nvidia Control Panel).
For the next two suggestions, go to the toolbar and click File -> Global Preferences -> Graphics:
Turn off Shadows and/or record a smaller resolution video to see if that helps. If it does, then the issue is likely that FlexSim is trying to use more VRAM than your graphics card supports.
Make sure you are using the Recommended OpenGL Context and not the Generic (No GPU acceleration) or Core Profile contexts.

Step 4: Did your model crash previously while recording a video?

There may be some corrupted data stuck in your recorder. It's an easy fix: delete your video recorder from the Toolbox and then re-add it.

Step 5: Are you using a much older version of FlexSim?

In a few of the older versions (2019.1 and older), temporary nodes from previous recordings caused the Video Recorder object to crash. To fix it, all you need to do is clear the nodes. Open your model, execute the following script, and then re-save your model:

Model.find("Tools/VideoRecorder>variables/active").subnodes.clear();

Step 6: None of the previous steps worked - what now?

If none of the above steps worked, you can create and check debug log files, check the file path, check various computer settings, and look at VRAM allocation to start solving the problem.

Check the file path FlexSim is writing the video to; if it is trying to put it in a strange place, that can cause issues.

FlexSim uses ffmpeg to encode the captured frames into a video file, which leaves an ffmpeg_log.txt file in the directory where the video was created. That file might have something in it that explains what is happening.

You can create fslogfile.txt and exfslogfile.txt files in your FlexSim 20XX Projects directory that FlexSim can write debugging information into. Sometimes you might be able to get exception information in those files in certain cases.

Check your settings in Windows, Group Policy, or other IT software. It's possible something on your machine separate from FlexSim is noticing that the FlexSim process is taking a long time to respond and is killing its process because it thinks it is hanging. Try clicking the Record button and then not clicking anywhere else on your machine until the recording is done, or try recording a shorter timeframe to see if that helps.

In Windows' Task Manager, you can add the Dedicated GPU memory column on the Details tab to see how much VRAM FlexSim is using. If FlexSim tries to use more memory than your graphics card supports (or if the total usage of FlexSim and all the other programs currently running exceeds your hardware's limit), then FlexSim will likely crash the next time it tries to allocate and use any graphics objects stored in GPU VRAM (such as output buffers used by the Video Recorder, textures, 3D meshes, etc.). You can use Windows' "dxdiag" tool to see how much VRAM your graphics card has.

If you tried all of these suggestions and nothing worked, feel free to ask a question on the Q&A board and we can work to see if we can find a different solution for you!
View full article
Does your license server randomly stop serving your FlexSim seats? When you check the logs, do you see error messages like the following?

5:34:42 (flexsim) Trusted storage binding change detected! Vendor daemon is being shutdown.
5:34:42 (flexsim) EXITING DUE TO SIGNAL 65 Exit reason 42
5:34:47 (lmgrd) flexsim exited with status 65 (Trusted storage binding change detected.)
5:34:47 (lmgrd) Vendor daemon shutdown - trusted storage binding change detected.

Hosting your license server on a VM

Many organizations host their license servers on virtual machines, and this is a perfectly fine and fully supported way to host your licenses. However, you should be aware that some common virtual machine workflows can disrupt your license server. For instance, you may encounter the errors above if you restored your VM's state to a live snapshot. A live snapshot takes a snapshot of memory as well as disk; Hyper-V is an example of a hypervisor that allows live snapshotting. If a virtual machine does a revert-to-live-snapshot operation, restoring the memory and disk to a previous state, it can interrupt the integrity of your server's trusted storage.

To prevent any corruption of your trusted storage state, Trusted-Storage-based license servers poll trusted storage every 90 seconds to determine whether a snapshot restore has occurred. Hence, within two minutes of a Trusted-Storage-based license server being reverted, it will shut down, with server log messages similar to the log block above. This shutdown helps preserve the integrity of your license server's local trusted storage.

Restoring functionality

As you can see at the bottom of the error message, the resultant state of your license server is that your vendor daemon is shut down. This means that your license server is no longer serving FlexSim seats. To restore full functionality, simply start (or restart) the service from Windows Services. Or, if you're hosting your licenses with lmadmin, start (or restart) the vendor daemon.
View full article
This example model was created to clarify pre-emption of task sequences created in process flows. Specifically it will show:

1) that pre-empting task sequences generated in a process flow does not require the token generating the sequence to be pre-empted.
2) that you can push a partially created sequence to a task sequence list type.
3) how to use recursive calls to a subflow.

and afterwards:

4) how the milestone task can be used in a process-flow-generated task sequence.
5) the use of a task of type nodefunction.

Each queue is a member of the object process flow shown in the picture. Each generates a box along with a job to take it to another queue, and pulls an initial task executer from the list of FreeTEs. The first task, to travel and collect the box, is preemptable, so we push the task sequence to a Preemptable Jobs list before we add that travel task, and pull it off the list when the task has completed. The rest of the task sequence is a standard load, travel and unload set of tasks.

When the job is complete we know that the TE is free and could preempt another TE - which could be desired if the newly freed TE is closer to the pickup queue than the one currently assigned. The CheckPreempt subflow is called with a label referencing the TE that has become free. This TE may not be the original TE that we pulled from the list of FreeTEs when the job was started, so we discover the current TE by looking for the task sequence's owner object - done with the function findownerobject(), since it does not cache the value the way ownerobject() does, and we could have had more than two TEs assigned to this task sequence. Since the task sequence gets destroyed, I do this in the Assign Labels activity before we finish the task sequence.

The Check Preempt subflow first tries to pull a preemptable job where the invoking TE is closer than the current executer of that job, and chooses the one where the difference is most pronounced. The task sequence list type has the distance field prepopulated, and from that the two other field expressions, otherDistance and howMuchCloser, are derived. If no task sequence is chosen, then we push the finishing TE to the FreeTEs list. If we choose a task sequence to preempt, then we first label the token with the otherTE and preempt the task sequence with a simple zero-delay task. Then we reDispatch the task sequence to the TE that is calling Check Preempt. Since we now know that we've freed the otherTE from the task sequence it had, we can invoke the Check Preempt subflow for that task executer, knowing that it will either be put onto the FreeTEs list or get another task sequence and preempt another TE. This is therefore a recursive subflow call, with the exit condition being that no preemptable task sequence is found.

Note that no token preemption from activities takes place, and this is because the task sequence is still tied to the token and its activity, which will still receive the callback when a task is complete, regardless of which task executer actioned the task.

TEpreemptIfCloser1.fsm

Next: Changing colors to indicate preemptable TEs and their destination - using milestone and nodefunction tasks.

To better see the allocation and preempt activities, we'll now change the model so that the TE color reflects the following:

White when idle
Gray when busy and not preemptable
Matching the pickup queue's color when preemptable

First of all I defined two user commands - one to set the color of the TE (setTEcolor) and another (commandNode) to get the executable node of a user command. This is so that I can add the fn() SetColor activity (a custom task of type nodefunction); specify the function using commandNode("setTEcolor") and pass in the queue color as parameter 2 (parameter 1 will be the TE that is executing this task). This means the user command code is just:

Object te = param(1);
Color col = param(2);
te.color = col;

The milestone task allows us to define a set of tasks that need to be repeated if the TE gets pre-empted. It is added again using the Custom Task activity, selecting the type as Milestone, and has a parameter to specify the number of subsequent tasks that should be considered within the preemption range that will trigger tasks to be repeated. Since we want the travel preemption to retrigger the setting of the color, we set the range to 2. However, since the token only needs to be notified when the travel task is complete (so that it can go on to the next process), and until then waits in the travel task, we can uncheck the "Wait Until Complete" box of the Milestone and fn() SetColor activities. If you don't clear this for those two tasks, expect some misbehavior/desynchronization with the token.

Finally there needs to be a reset trigger on the TE to set the color to white, and a regular Change Visual activity after the pickup point is reached to set it to gray.

Note: I prefer the use of user commands to sending messages, which would be another option, mostly because users can tell the intent of the call without having to inspect the message code.

TEpreemptifCloser2.fsm
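As a rough idea of the kind of query the Check Preempt pull described above might use (the exact activity settings are in the attached models; the field usage here is only illustrative, based on the howMuchCloser field mentioned earlier), the Pull from List query could look something like:

WHERE howMuchCloser > 0 ORDER BY howMuchCloser DESC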
View full article
In this video you will learn how to use the ASRS object to represent automated storage in racks. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
View full article
Products from the bottled water manufacturing plant are received using 11 Rail Guided Vehicles (RGVs), which transport them to the Automated Storage and Retrieval System (ASRS) that serves all three production lines. The RGVs operate as follows: when products are placed in the receiving buffer, which has two available positions, the system immediately sends commands to the RGVs to pick up the products from the buffer. The RGVs prioritize picking up the nearest products first. Consequently, the production line located farthest away (Production Line 1) experiences the highest downtime from January to May 2566, with an average downtime of 449.48 minutes. Production Line 2 has an average downtime of 65.12 minutes, while Production Line 3 experiences 76.00 minutes of downtime during the same period. If a production line stops for more than 20 minutes, it requires a complete drainage of the ongoing production process, including scrapping all unfinished goods, which significantly increases production costs. This makes conducting experiments on the real system to address the issue complicated, as full-scale production would be necessary to identify the causes and implement effective solutions; stopping production for testing purposes is not a viable approach in this case.
View full article
This article is basically a follow-up to this question: https://answers.flexsim.com/questions/98195/simultaneous-all-or-nothing-list-pulls.html

In version 22.1, we added a new FlexScript feature called coroutines. Basically, this lets you wait for events in the middle of executing FlexScript, using the "await" keyword. Recently, I decided to revisit this question (how to pull from multiple lists at once) and to see if I could use coroutines to simplify the Process Flow. The short answer is: yes! Here is a model that behaves identically (in terms of which token gets its multi-list request filled first) but replaces about 15 activities with 2 Custom Code blocks: multipullexample_coroutines_ordering_onfulfill.fsm

In addition to having fewer activities, this model also runs faster (from ~12s to ~2s on my computer). To determine whether the behavior was the same, I added a Statistics Collector and logged request/fulfill times and quantities. I did the same in the original model. The token IDs are off because the older model makes more tokens. Here's the older version, but with that stats collector, if you want to do your own comparisons: multipullexample.fsm

The key lines of code are found in the Acquire All activity:

// ~line 47
List.PullResult result = list.pull("", qty, qty, token, partition, flags);
if (result.backOrder) {
    token.Success = 0;
    await result.backOrder;
    return 0;
}
// ...

The new item here is the keyword "await". When the FlexScript execution reaches this line, the execution is paused and waits for the "awaitable" that you specify. In this case, we want to wait for the back order to be fulfilled.

In both models, tokens check all lists to see if they can acquire the complete set of resources. In both models, if a token can't immediately fulfill one of its requests, the token "goes to sleep" until something changes. In the old model, tokens would "wake up" if anything was pushed to the correct list. In contrast, tokens in the new model only "wake up" if enough items are pushed to fulfill their back order. Basically, the second model has fewer false "wakeups" and so runs quite a bit faster.
View full article
This article explores an example model. In this model, items on downstream lanes are able to reserve dogs so that items on upstream lanes cannot use them: reservedogdemo.fsm

About Dogs in FlexSim

FlexSim simulates dogs on a power-and-free system in an extremely abstract and minimal way. A dog isn't a persistent entity at all. Instead, FlexSim calculates where dogs would be, given the speed, and when they would interact with items. This has a huge performance benefit. But if your logic needs items to interact with specific dogs, this can pose a problem: how do you interact with such an abstract entity?

The Catch Condition

The only time you can "see" a dog in FlexSim is during the conveyor's Catch Condition: https://docs.flexsim.com/en/23.1/Reference/PropertiesPanels/ConveyorPanels/ConveyorBehavior/ConveyorBehavior.html#powerAndFree

The catch condition fires when a dog passes by an item. If the catch condition returns 1, the item catches the dog and transfers to the power-and-free conveyor. If the catch condition returns 0, the item does not catch the dog. During the catch condition (and only during a catch condition), you can learn many things about a dog:

ID - Each dog has an ID. The ID is derived from the length of the conveyor and the distance the conveyor has travelled. If a conveyor is 26 dogs long, the dogs will have IDs 1 through 26.
Location - Since an item is trying to catch the given dog, you can derive the dog's location from the item's location.
Speed - The conveyor that owns the dog is "current" in the catch condition, so you can get the speed of the conveyor at that point.

We'll use all these pieces of information in a moment.

Creating Tokens to Represent Dogs

The first real insight into this model is to make a dummy item. The purpose of this dummy item is to cause the catch condition to fire. It never gets on the conveyor. But when the catch condition fires, it makes a token that represents the dog. In this example, that item has a label called "DogFinder". Here is the relevant code from the catch condition:

if (item.DogFinder?) {
    Object pe = current.DogPE;
    if (pe.stats.state(1).value == PE_STATE_BLOCKED) {
        return 0;
    }
    double dist = current.MaxDogDist;
    double speed = current.targetSpeed;
    double duration = dist / speed;
    if (!item.labels["DistAlong"]) {
        item.DistAlong = Vec3(item.getLocation(1, 0, 0).x, 0, 0).project(item.up, current).x;
    }
    Token token = Token.create(0, current.DogHandler);
    token.DistAlong = item.DistAlong;
    token.Conveyor = current.as(treenode);
    token.Duration = duration;
    token.DogNum = dogNum;
    token.Speed = speed;
    token.DetectTime = Model.time;
    token.release(1);
    return 0;
}

There's a lot going on in this code:

This logic only fires for the fake dog-finder item.
If the photo eye just upstream from the dog is blocked, that means there is an item, and this dog is not available. Return here if that's the case.
Figure out how long this dog will last (the duration), assuming the conveyor keeps running at the same speed. In this model, there's a label on the conveyor called MaxDogDist. This is the distance from the PE to the end of the conveyor, minus 2 meters.
If this is the first dog ever, calculate the position of the dog, given the position of the item. Store that on a label.
Create a token with all kinds of labels. We'll need all this information to estimate where the dog is later, and to estimate how far it is from other items.

Pushing Dog Tokens to a List

Once the token is made, we need to push it to a list, so that items can pull them. If all your items are the same size, you can just push the token to a list directly. In this model, however, there are larger items that require two dogs. So there's a Batch activity first. The dummy item is far enough back that it can detect two dogs and still push the first dog to the list in time for the first lane. So it holds the dog back in a Batch activity until one of two things happens:

Either the next dog token appears, completing the batch.
Or the max wait timer on the batch expires, indicating that the next dog is not available (otherwise there would have been a token). This duration is based on the conveyor's speed and dog interval.

If the batch is complete, the first dog in the batch can be marked as a "double", meaning the dog behind it is also available. Once the flow has determined whether the dog is a single or a double, it pushes it to the list.

Creating the DistToDog Field

When pulling a dog from the list, an item needs to know the position of the dog relative to the item. Is it 0.3 meters upstream? Or is it 2 meters downstream? When we query the set of dogs, we need to filter out downstream dogs and order by upstream dogs, to reserve the closest one:

WHERE DistToDog >= 0 ORDER BY DistToDog

Here, DistToDog is positive if the dog is upstream, and negative if the dog is downstream. The code for this field is as follows:

/**Custom Code*/
Variant value = param(1);
Variant puller = param(2);
treenode entry = param(3);
double pushTime = param(4);

double distAlong = Vec3(puller.getLocation(1, 0, 0).x, 0, 0).project(puller.up, value.Conveyor).x;
double dt = Model.time - value.DetectTime;
double dx = value.Speed * dt;
double dogDistAlong = value.DistAlong + dx;
return distAlong - dogDistAlong;

This code assumes that the item waiting to merge is the puller. So we calculate the item's "dist along" the main conveyor. Then we estimate the location of the dog since the DogFinder item created the token. Then we can find the difference between the item's position and the dog's position.

Pulling Dogs from the List

Each incoming lane has a Decision Point. The main process flow creates a token when an item arrives there. At a high level, this token just needs to do something simple: pull an available downstream token. If all the items are the same size, it's that simple. But this example is more complicated! If the item is large, we also need to pull the upstream dog behind the dog we got, so that no other item can get that dog. And it gets even more complicated! It can happen that an item acquires the dog after a double dog. In that case, we need to mark the downstream dog as "not double", so that big items won't try to get it. Most of the logic in the ConveyorLogic flow handles that case.

Using the Dog

Finally, the item must be assigned to that dog. The ConveyorLogic flow sets the DogNum label on the item. Then the catch condition checks to see if the dog matches the item's DogNum.

Upstream Items

The final piece of this model is allowing upstream items to catch a dog on this conveyor. This model adds a special label to those items called "ForceCatch". The catch condition always returns true for those items.
View full article