FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2, and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories, namely initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based order generation and history-based order generation.

Initial Inventory
The model includes a Global Table of initial inventory. The process flow's initial inventory section reads this table, and then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot().

Inbound
I use the process flow to assign a slot to each incoming item, using an Assign Labels activity called Find Slot. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity properly finds a slot to store the item, then I assign the item to that slot and have an operator place it in the rack.

Outbound
I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method. In the random method, orders are generated randomly every ~30 seconds. Each order includes a random number of SKU line items, and each line item includes a random quantity of that SKU. Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, the token reserves the item as "outbound" by assigning the Storage.Item.assignedSlot property to null (the Set to Outbound activity). This ensures that no other process will find that same item for picking. The history-based order generation process uses much of the same functionality as the random-based process, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format for what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order and then spawning that order.

Custom Rack Visualization
I have also customized the visualization of the racks. I added text to the front (and bottom, on the floor) of each rack slot that shows the address of that slot. Further, I've given the text a background that is color-coded to the SKU that the slot is designated to store. This was all done through the Storage System's Visualizations tab, by customizing the Rack visualization.
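As a rough sketch (not code taken from the model) of the direct-addressing lookup described above: the initial inventory loop reads the table row by row and resolves each slot from its address. The table, column, and variable names here are assumptions for illustration:

// read the initial inventory table and resolve each slot by its address
Table inv = Table("InitialInventory");                   // hypothetical table name
for (int row = 1; row <= inv.numRows; row++) {
    string address = inv[row]["Address"];                // hypothetical column name
    Storage.Slot slot = Storage.system.getSlot(address); // direct addressing via the Address Scheme
    // create an item here and assign it to the slot, as the initial inventory section does
}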
View full article
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy-to-set-up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm

All necessary setup is done through labels of the collector. The first three are identical to labels found on the default Statistics Collector behind a state bar chart.
- Objects should point at a group that contains all objects the collector should track the utilization for.
- StateTable is a reference to the state table that will be used to determine which states count as 'utilized'.
- StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile).
- MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization.
- NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600 s twelve times within that interval, i.e. every 300 s. Each measurement still denotes the utilization over the complete "MeasureInterval". The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the hour preceding that point in time.
- StoredTimeMap is used to allow the collector to correctly function past a warmup time, by storing the total utilized value of each object up to that point. This should not be manually changed. Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply".

The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The "NumSubIntervals" label essentially just controls how many entries are kept in that array.

To copy the collector into another model, create a fresh collector in the target model. Then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector. I hope this can help to speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it). Best regards Felix

Edit: Added a user library with the collector as a draggable icon to the attached files.
Edit2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this.
Edit3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
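As a minimal sketch of the measurement math described above (variable names are hypothetical; in the collector the samples live on row labels):

// the last NumSubIntervals + 1 'total utilized time' samples for one object, oldest first
Array history = [120.0, 150.0, 180.0, 240.0];             // hypothetical values
double oldest = history[1];                               // FlexScript arrays are 1-based
double newest = history[history.length];
double measureInterval = 3600;                            // as set on the MeasureInterval label
double utilization = (newest - oldest) / measureInterval; // average utilization over the interval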
View full article
Engage with the FlexSim community here on the FlexSim forum boards. Check out our learning resources. Customers with current licensing can request direct technical support from their FlexSim representative, via phone, email, web meeting, or support ticket.  
View full article
This model and library will allow you to produce a heat map of anything moving in the model - including AGVs and flowitems. To add this to a model is simply a matter of:
1) Load the attached user library
2) Add objects to the group HeatMapMembers
3) Drop a heat map object (cylinder) into the model - reset and run.
With this updated version you can now have multiple mapper objects in the same model showing different groups - made easier by the addition of a 'groupName' label on the mapper. You can easily change the height at which the map is drawn using the 'zdraw' label, and alter the sampling interval and grid size using the 'heatInterval' and 'resolution' labels. The resolution is the number of divisions per model length unit. In the example model, set to metres, a value of 2 gives 4 divisions per square metre. Currently non-flowitems are set to ignore time when the object is in an idle state. HeatMapAnything.fsl HeatMapAnything.fsm
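Because the mapper is configured entirely through the labels just listed, you can also adjust it from a script. A small sketch; the mapper's name in your model is an assumption:

Object mapper = Model.find("HeatMap1"); // hypothetical name of the dropped cylinder object
mapper.groupName = "HeatMapMembers";    // which group of objects to sample
mapper.zdraw = 0.1;                     // height at which the map is drawn
mapper.heatInterval = 5;                // sampling interval
mapper.resolution = 2;                  // divisions per model length unit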
View full article
Using Mixamo Fuse, Mixamo, and (optionally) 3ds Max, you can create animated characters for use with FlexSim. For modifying the existing operators' animations, see Adding More People Animations From Mixamo.

Character Creation
You can customize a character's look using Mixamo Fuse. After creating the character, you can save your character as a .fuse file in case you want to edit it again later in Mixamo Fuse.

Character Rigging
From Mixamo Fuse, you can press the Animate button in the upper-right corner to upload the character to Mixamo's website to apply animations. You will need to log in with a Mixamo account. You will be presented with the Mixamo Auto-Rigger. After a few moments, your character will appear in a view with a default animation applied. Choose whether to enable facial blendshapes and select the appropriate level of detail for the skeleton, such as 2 Chain Fingers, and then press Update Rig. The auto-rigger will reapply and the view will be updated with the newly rigged character. If the settings are good for your needs, press the Finish button.

Character Animating
Once the character is rigged, you can select it on the Mixamo website and press Find Animations to find animations to apply to the character. You can search for animations and add them to packs. For each animation, you can adjust various parameters to fit your character to that animation better.

Exporting the Character and Animations
Once you have applied and customized animations for your character, you can add them to the Cart and checkout in order to have them available for download. After checking out, you can select an animation or animation pack to download on the My Animations page. You can apply the same animation sets to different characters using the Change Character button after selecting the animation set. Press the Queue Download button to tell the Mixamo server to create the files for download. You want to use the FBX format so that we can edit it with 3ds Max. The pose can probably be T-Pose or Original because the Original is probably a T-Pose anyway. I use 30 Frames per Second for consistency and ease of timing animations. Keyframe Reduction didn't seem to do anything in my tests, and you can do it yourself with more control in 3ds Max, so I use "none." If you exported multiple animations, the output will contain one fbx file that is the character shape and several other fbx files that contain only animation data for each animation.

(Update) Important Note: Starting with FlexSim 2017 Update 2, the rest of the steps in this document are no longer necessary. FlexSim 17.2 added support for embedded textures, specular maps, and gloss maps that fix the material issues directly without modification in 3ds Max. FlexSim 17.2 also added support for assigning animations to a shape from multiple files, so you can import the Mixamo output files directly without having to merge all the animations into a single file. The following steps can be used if you want to optimize or otherwise modify the shape files, but are no longer required.

Preparing the Character using 3ds Max
Import the character shape .fbx file into 3ds Max and save the file as a .max file. You will want to resave as a .max file at different points with different file names as you progress so that you can easily revert back to a previously known working point if something messes up. The FBX Import window will show you lots of options you can modify. The defaults seemed to work fine.
I tried messing with the "Units" scale factor and file unit conversions, but I was unable to find any options that improved the scaling in FlexSim. Ultimately, I found it best to just use Automatic and sort out the units myself using FlexSim's shape factors. Export the file as fbx and bring it into FlexSim to see how it looks. Again, the default options seem to work fine. In FlexSim, the size will probably be 100x too big (no matter what "unit conversion" options I set in the import/export options). Resize the character to be 100x smaller.

Fixing the Materials in 3ds Max
When you first bring a Mixamo Fuse character into FlexSim, it will probably look shiny and dark. That can be fixed in 3ds Max. Open the Material Editor. Double-click each material in the list of Scene Materials on the left to add it to the middle view for editing. Press the "Lay Out All – Vertical" button to arrange the nodes nicely in the middle view. When you imported the .fbx file, 3ds Max should have automatically unpacked its textures into a directory called CharacterFileName.fbm. When you double-click on a texture node in the Material Editor view, you should be able to see and edit the path where it is referencing each texture in the Material Parameter Editor pane on the right.

Removing the shininess
Each material has Ambient, Diffuse and Specular colors, in addition to any textures that are mapped to various channels. The operator is shiny everywhere because each material's Specular color is set to White and their specular maps are mostly black. Since FlexSim doesn't currently read specular maps and gloss maps, the white specular color is being applied everywhere instead of the black color from the specular maps. To fix it, click each specular map and gloss map in the center view and delete them with the Delete key. Then double-click on each main material and set its specular color to black to turn off the shininess. Save the .max file with a new name, and re-export the .fbx file to test it in FlexSim. The shininess should be gone.

Brightening the darkness
FlexSim uses Assimp to read fbx files. Assimp's fbx importer sets a default Ambient color value of dark gray on the materials instead of leaving it off. This is why the character looks darker than it should. To fix it, we can simply specify an ambient color of white on each material. For each material, set the ambient color to white and check the Ambient Color box under Maps. Save the .max file with a new name, and re-export the .fbx file to test it in FlexSim. The darkness should be fixed.

Setting local texture paths
Based on the procedure above, the texture paths are likely to be absolute paths when you export the fbx file. You can fix that by saving the .max file, the .fbx file, and the textures all in the same directory. In Windows Explorer, copy all the useful textures from the CharacterName.fbm directory into the same directory where you have been exporting the .fbx file. Then also save your .max to that same directory. Once everything is in the same directory, you can reselect the texture file paths for each material so that they are pointing at these files instead of the other files. Then, when you export the fbx file, each texture's path will be relative, so you can copy all the necessary files onto any computer and use them correctly. After changing all the materials, export the fbx file again.
You should be able to copy the fbx file and all its textures into a directory on a different machine and have them display properly. Save your .max file with all the updated materials.

Making part of the object change color
To make a material show the FlexSim color, append _fsclr to its name in 3ds Max.

Configuring Animations
With the character loaded, you can import the fbx files with just animation data, and 3ds Max will automatically apply that data to the shape. The slider at the bottom controls the currently applied keyframe. The buttons in the bottom corner play the animation, pause, or step between keyframes. Animation data can be stored beyond the relevant range. You can open the Time Configuration dialog by pressing the button in the bottom corner. In that dialog, you can specify the speed of the playback and set a Start Time (keyframe number) and End Time for the animation you want to see in the viewport when you play. This dialog only changes the playback options in the software; the actual data outside the specified range is still preserved. You may need to adjust these values as you import/modify/combine animations into one timeline.

Keyframes
If you push the button on the top toolbar, it will open the Track View – Curve Editor. You can edit and modify keyframes through this view. Click in the left pane and press Ctrl-A to select everything in the model. You should see keyframes appear if you have animation data loaded. Sometimes this window doesn't refresh correctly and you can't find keyframes. You can try closing it and reopening it, or pressing the buttons that re-center the view on the keyframes in the configured time range. To edit the animations, you need to be able to see the keyframes in this view. You can drag the current frame by dragging the yellow bar around. You can zoom by scrolling the mouse wheel. Ctrl-mouse wheel will zoom the timeline (X-axis) without zooming the key values (Y-axis). You can select keyframes by dragging a rectangle around them. You can delete selected keyframes with the Delete key. You can move keyframes by dragging them. If you hold Ctrl while dragging them, the movement is constrained to the X-axis, so that you don't actually edit the keyframe's values, just its time. You can duplicate keyframes by holding Shift and dragging them. Again, holding Ctrl will prevent the new keyframes' values from shifting by holding the Y-axis still. You can scale keyframes by selecting them, putting the key keyframe indicator on the first keyframe in the selection, picking the menu option Edit > Transform Tools > Scale Keys Tool, and then clicking and dragging on one of the selected keyframes to scale all of the selected keyframes towards the yellow bar. This can be helpful to sync the timing of animations, such as walking empty vs. walking loaded.

Preparing Animations for Merging
When you load an animation into the file, the Track View – Curve Editor will show you the individual components that have keyframe animations. Different animations may affect different components, and the interpolation of transformation information between keyframes may get messed up when you merge different animations that affect different components together into one timeline. To fix this, you can stamp a keyframe at the beginning and end of each animation that stores each component's position at that point.
You can also duplicate the first and the last keyframes to make it easier to specify perfectly-repeating animation clips in FlexSim later. Stamp a keyframe by moving the current frame (yellow line) to the keyframe you want. Then select all within the Track View – Curve Editor to select every component. Finally, press the Set Keys button at the bottom of the main 3ds Max window to store each component's value as a keyframe. Duplicate keyframes by holding Shift and dragging them. Also holding Ctrl will prevent the new keyframes' values from shifting by holding the Y-axis still.

Merging Multiple Animations into One File
To merge multiple animations into one file, you need to export each of the animations as an animation file (XML Animation File (*.xaf)). Then you can import each of these animations into a specific range of keyframes in the timeline. From the .max file without any animations loaded, import one of the fbx animation files. It will automatically apply it to the shape. Click in the Perspective view and press Ctrl-A to select everything. Then prepare the animation using the information above. Lastly, select the menu option Animation > Save Animation to save the animation as an XML Animation File (*.xaf). Reload the .max file without any animations and repeat this process for each animation you want to merge.

Reload the .max file without any animations loaded. Click in the Perspective view and press Ctrl-A to select everything. Then select the menu option Animation > Load Animation to load an animation into the timeline. On the right side of the Load XML Animation File dialog, you can specify the keyframe number at which the animation should be inserted. Do not insert the animation at keyframe 0; make sure that keyframe 0 is the bind T-pose. This will be important later if you want to edit the mesh without breaking the skeletal rigging. You also need to be sure to specify Absolute and not Relative. Select the animation you want to import, specify the keyframe you want to insert at, and then press Load Motion to load the keyframes into the animation. Do this for each animation file you want to merge. Use the Track View – Curve Editor to determine the keyframe at which to insert each subsequent animation. Save your .max file with all the merged animations to a new file.

Modifying the Mesh without Breaking the Rigging
Sometimes you may want to modify the mesh after you have applied animations. I can't guarantee that this always works perfectly, but there is a way to make some tweaks even after animations have been applied. Before trying to edit a skinned mesh, be sure to save your work so you can revert back to a clean state if something doesn't work correctly. First, you need to be sure that the bind pose is frame 0 and that you are set to frame 0 on the timeline. Second, click on a mesh in the scene and click on the Modifiers tab. You will probably see an Editable Poly with a Skin modifier applied. If you click on the Skin modifier and expand the Advanced Parameters, you will see an Always Deform checkbox. If you clear the Always Deform box, you can then click on the Editable Poly and modify it. You then need to recheck the Always Deform box on the Skin modifier once you are done making edits. Ensure that your animations still work after making edits. In my tests with the operator shape above, when I cleared Always Deform, the mesh moved up slightly. This made the Tops mesh not line up correctly with the Body mesh.
To fix that, I cleared and immediately checked Always Deform for each mesh (each mesh then moved up the same amount). Then I verified that the animations still worked and that the arms still lined up correctly with the shirt.

Reducing Polygon Count
3ds Max has features for reducing the polygon count of an object. You can apply these modifiers without breaking the rigging by following certain steps. You can display the polygon count of your shape by clicking in the Perspective view and pressing the 7 key on the keyboard. As stated above, before modifying the mesh, clear Always Deform on the Skin modifier. To reduce the polygon count, you can use the ProOptimizer modifier. After clearing Always Deform, click on the Editable Poly and then select ProOptimizer from the Modifier List to add it to the stack between the Editable Poly and the Skin modifiers. In the Optimization Options, be sure to check Keep Textures. Then press the Calculate button. Then you can specify a Vertex % and it will start to remove vertices from your mesh. After applying the ProOptimizer modifier, the normals on your mesh might be messed up. You can recalculate the normals based on a crease angle using the Smooth modifier. Apply the Smooth modifier after the ProOptimizer and before the Skin modifier. Check the Auto Smooth box and specify a Threshold. You can use a value of 89 to get a very smooth surface, only applying a crease to angles that are greater than 89 degrees. After making these changes, be sure to recheck Always Deform on the Skin modifier, and test your animation to make sure the rigging is all still working properly.
View full article
In this video we will learn to use the FlexSim HC environment to build a patient care model from scratch. You will learn to: import an AutoCAD drawing, create 3D objects, build the model logic using Process Flow, gather statistics, and design an experiment. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips HC playlist.
View full article
Attached is an example model and user library comprising commands to return an array of objects whose bounding boxes intersect, and a Collision Detection object to drop into your model. The Collision Detection object has a ticker interval label to adjust the frequency of checks, and will switch the colliding objects to 'selected'. It looks for two groups: "Obstacles", containing static objects in the scene (which may be overlapping and are not recorded as collisions), and "Colliders", which are the objects navigating the scene whose bounding boxes should be checked for intersections. In the example model I'm adding the flowitem when it is created using Group("Colliders").addMember(item). The detector code is on its FlexScript label, 'analyseScene', which is first scheduled to run by the object's reset trigger. collisionDetection3.fsm BBCollisionDetection2.fsl
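For reference, the bounding-box test the library's commands perform reduces to comparing world-space extents on each axis. A minimal sketch, assuming the min/max corners of the two boxes have already been computed:

Vec3 aMin = Vec3(0, 0, 0);     Vec3 aMax = Vec3(1, 1, 1); // hypothetical extents of box A
Vec3 bMin = Vec3(0.5, 0.5, 0); Vec3 bMax = Vec3(2, 2, 1); // hypothetical extents of box B
int intersects = aMin.x <= bMax.x && bMin.x <= aMax.x
    && aMin.y <= bMax.y && bMin.y <= aMax.y
    && aMin.z <= bMax.z && bMin.z <= aMax.z;              // 1 if the boxes overlap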
View full article
In addition to the animations that are on the Operator and Person flowitem by default, you can easily download and add more animations from Mixamo. Below are the steps to do that. If you want to create new characters with animations, see Bone Animations.
1. Download the attached two files: FlexSim_Operator.fbx and FlexSim_F_Operator.fbx. These are the versions of the male and female operator shapes as exported from Mixamo originally, before they were modified and optimized in 3ds Max.
2. Log into your account at https://www.mixamo.com and press the Upload Character button in the Characters section of the site.
3. Drag the FlexSim_Operator.fbx or FlexSim_F_Operator.fbx shape from step 1 onto the window that appears. A progress bar will appear as the shape uploads to Mixamo's server. Once it is finished uploading, press the Next button.
4. On the Animations section of the site, select an animation you want to apply. Adjust the parameters as desired and press the Download button.
5. Select "Without Skin" in the Skin section and press the Download button. This will download just the animation file rather than the animation, the mesh, and the bones. The mesh and bones are already in the software, so you only need the animation if you are editing the standard male or female operator shapes.
6. In FlexSim, edit the Animations for an Operator or a Person flowitem.
7. In the Animations and Components window, select Edit Animation Clips.
8. In the Animation Clips window, press the Plus button to add the animation you downloaded in step 5 above. Optionally, you can edit the animation clip's name and press Apply. You can also edit the clip's length or split the animation into multiple clips using this window.
9. Close the Animation Clips window when you are done.
10. In the Animations and Components window, add a new Animation and name it. Then add an Animation Clip to that animation. Then select the clip and set its Animation and Clip values to the animation/clip set in step 8.
11. Now you can use the new animation in your model on this operator or person flowitem.
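Once the animation is defined, it can also be triggered from code (for example in a trigger or a process flow activity) with the startanimation() command. A minimal sketch; the operator reference and the animation name are assumptions:

Object operator = Model.find("Operator1");  // hypothetical operator in the model
startanimation(operator, "MyNewAnimation"); // play the animation added in step 10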
View full article
This article reviews one method for making a state Gantt chart for the default and alternate state profiles.

Example Model
You can download the model for this walkthrough (stateganttdemo.fsm). The model has two multiprocessors, in a Group called Multiprocessors. Each multiprocessor has two processes: Process1 and Process2. To make the chart, we will first make a Statistics Collector, and then a Calculated Table.

Making the Statistics Collector
Make a new Statistics Collector. On the Event Listening tab, use the Sampler to listen to On State Change of the group of multiprocessors. You can leave the parameter names alone. However, we need to add a label, so we can record the profile number. Select the new event, and then use the green plus button in the Event Labels area to add a label for this event. Set its name to ProfileNum, and its value to the following code:

data.StateProfileNode?.rank

Next we need to set the row mode. Make sure it's set to Add Per Event, with no row value. As the final configuration step for the statistics collector, we need to set up the columns. There should be four columns in this collector:
Time - In the pick options, select Time, then Model Date/Time.
Object - In the pick options, select IDs, then ID of Event Object.
Profile - Type data.ProfileNum for the value. The default storage and display format are fine.
State - Type the following code, and set the Storage Type to String:

data.eventNode.as(Object).stats.state(data.ProfileNum).profile[data.ToState + 1][1]

The code is necessary because On State Change occurs before the state is set to the new state. So the code is looking up the name of the future state in the profile table. When you reset and run this model, you will see the table fill in, one row per state change.

Making the Calculated Table
Make a new Calculated Table, and give it the following query:

SELECT Object, Time as StartTime, LEAD(Time) OVER (PARTITION BY Object) AS EndTime, State FROM StatisticsCollector1 WHERE Profile = 1

This query creates an Object column as well as a Time column. To get the time that the current state ends, we look to when the next state begins. The LEAD() function looks ahead in the table, and the OVER (PARTITION BY Object) clause ensures that LEAD() looks ahead to the next row with the same Object. For example, if one object's rows have Time values 10, 25, and 40, then LEAD(Time) yields 25, 40, and NULL, so each state's EndTime is the next state's StartTime. We also record the State column, and filter out the standard state profile, keeping the special multiprocessor state profile. Once you get this query to work, change the Update Mode to By Interval, and set the interval to 20 or 30. Since the Statistics Collector table will get longer and longer, the query will become more and more expensive as the model runs. To control how much time is spent running the query, we use an interval. You will also need to set the Display Format of each column on the Display Format tab (Object, Date/Time, Date/Time, and Raw).

Making the Chart
Make a new dashboard, and create a Gantt chart. Point it at the Calculated Table. When you do that, the chart should fill in all the other columns correctly.

Charting Both State Profiles for Both Objects
In order to chart both profiles on the same chart, we first need to add a column to the Statistics Collector, and then update the query in the Calculated Table. The new column should be named ObjectAndProfile, with a Storage Type of String.
Use the following code for a value:

data.eventNode.name + " - " + string.fromNum(data.ProfileNum.as(int))

Then change your query to the following:

SELECT ObjectAndProfile, Time as StartTime, LEAD(Time) OVER (PARTITION BY ObjectAndProfile) AS EndTime, State FROM StatisticsCollector1

With these changes, you should be able to view both profiles for both multiprocessors.
View full article
In this video you will learn how to use the Dispatcher to manage teams of workers in a simulation model. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
View full article
In this video you will learn how to build a simulation model representing an automated material handling system based on conveyors. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
View full article
As of the current release, 2024.2.2, FlexSim does not provide a tool to export an animated USD stage of a model. There is a workaround, though: record the FlexSim model, connected to an NVIDIA Omniverse live session, using one of the tools in NVIDIA's USD Composer, the Animation Stage Recorder. This is a step-by-step tutorial to guide you through the process. It is assumed that if you are interested in this, you already know how to connect FlexSim to a live session in NVIDIA Omniverse. This may not be the perfect way to go, but it worked for me. Record_Animated_Stage_from_FlexSim.pdf
View full article
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that serves FlexSim to a browser, you may want to proxy to this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Apache web server.

Install the FlexSim Web Server Program
Download and install the FlexSim Web Server from https://account.flexsim.com
Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
Change the port from 80 to 8080
Start the FlexSim Web Server by double clicking flexsimserver.bat
Test the server by going to http://127.0.0.1:8080

Install Apache Web Server for Windows
Download Apache x64 from https://www.apachehaus.com/cgi-bin/download.plx
Extract the httpd-<version>.zip file you have downloaded
Go into the httpd-<version> folder you have extracted and copy the Apache24 (or Apache25) folder to C:\

Install Apache Dependencies
Make sure you have the Microsoft Visual C++ 2008 SP1 package installed. You can get it here: https://www.microsoft.com/en-US/download/details.aspx?id=26368
Download the vcredist_x64.exe package and run the installation

Configure Apache
Open C:\Apache24\conf\httpd.conf in a text editor. Look for the following lines in this configuration file and remove the # character:

#LoadModule proxy_module modules/mod_proxy.so
#LoadModule proxy_http_module modules/mod_proxy_http.so
#LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

Those modules should now look like this:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

At the bottom of the httpd.conf file, add these 3 lines and save the file:

ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
AllowEncodedSlashes On

Run Apache Web Server
In a file explorer or CMD line prompt, browse to the C:\Apache24\bin folder and run httpd.exe

Now that you have the FlexSim Web Server proxied through Apache, you may decide you want to configure Apache to handle security, authentication and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting up these customizations. A few resources you may consider:
https://community.apachefriends.org/f/
https://stackoverflow.com
View full article
This article is an updated version of a previous article by @Paul Toone. Thanks to Paul for the original, which you can read here: https://answers.flexsim.com/articles/23118/flexsim-xml-18.html

In FlexSim, you can save any tree (including models and libraries) in XML format. There are many advantages to using this capability in your model development, including:
- Since XML is an ascii/text based format, it increases the utility of using content management and versioning software such as Subversion, CVS, Microsoft Visual SourceSafe, git, etc. to manage the development of a model. These systems automatically track a history of changes to a model or project, and for ascii/text based files, they allow you to see line-by-line change logs for the files, as well as revert line-item changes if needed.
- With the added benefit of versioning systems comes the ability to develop a model concurrently by different modellers using a much more streamlined method. No more saving off individual t files from one modeller's changes to a model and loading them manually into the master model. When saving to XML format, if one modeller is working on one portion of the model, only that portion of the model file changes when saved, so when the modeller checks his changes into the versioning system's repository, another modeller can automatically merge those changes into his copy. If there are conflicts, where two modellers change the same part of the model, then the versioning system has tools that allow modellers to easily see in the XML file where the conflicts occur and quickly perform the manual conflict resolution.
- FlexSim also has the added ability to distribute a single model across multiple XML files. This increases your control over how to manage the model development in your versioning system, and makes conflict resolution even easier. The distributed save mechanism also opens the door for much more automated control of FlexSim. By saving small pieces of your model off into separate XML files, you can auto-regenerate one or more of those XML files outside of FlexSim, consequently changing the configuration of the model for the purpose of running different scenarios.

Tutorial
This tutorial will take you through saving a model as XML, and show how you can configure the distributed save.

Build a Simple Model
First build a simple example model containing a Source, Queue, Processor and Sink.

Save in XML Format
Save the model, and in the save dialog, choose "FlexSim XML" from the drop-down at the bottom. Save the model as "PostOfficeXML". Now you've saved the file in FlexSim's XML format. You can view the file in a regular text editor like Visual Studio Code, Notepad++, etc. It's not necessary to know all the details of the file format. In simple terms, the primary tag in FlexSim XML is the <node> tag, representing a node in FlexSim. The node's name is described in the <name> tag, and the node's data, if applicable, is described in the <data> tag. If you plan on doing automated changes, and need a more detailed definition of the xml format, you can refer to FlexSim's xml schema (see the FlexSim install directory/help/MiscellaneousConcepts/FlexSimXML.xsd for more information).

Version Management with Git
At FlexSim, the development team uses Git to manage collaboration between developers. Part of development includes updating the MAIN and VIEW trees. In a similar way, you can use Git to collaboratively update the MODEL tree. The standard way to use Git is as follows: set up a remote repository for the project.
There are many online services for this, including GitHub, BitBucket, and many others. Each one has different pricing and terms of use. It is also possible that your company already runs a Git hosting service where you could create your remote repository on a company server. Once you have a remote repository, each user "clones" that repository to their local computer. Everyone "pulls" the latest changes from that repository and "pushes" their changes to that repository. To manage pushing and pulling, you can use Git on the command line. However, most users interact with Git through a client. One free one is called Git Extensions: http://gitextensions.github.io/ By itself, version management is a deep topic. You might consider reading some tutorials to become more familiar with Git concepts:
https://git-scm.com/docs/gittutorial
https://github.com/git-guides
https://github.com/skills/introduction-to-github
https://www.atlassian.com/git/tutorials/learn-git-with-bitbucket-cloud

Managing Changes with Git
Let's assume you have a repository with the model file already in it. Let's make a change by moving the queue and saving the model. Once you save the model, Git can show you the changes you've made; in this case, you can see the changed spatialx and spatialy values in your Git client. At this point, you could commit and push your changes. Others could then pull the changes into the model. Notice that Git looks at changes in terms of lines being added and removed. In this case, we removed two old lines and added two new lines.

Distributed Save
As models get bigger and more complex, it can be helpful to split a single model file into multiple XML files. To do this you create a "FlexSim File Map" file. This is an xml file with the .ffm extension. It must be placed in the same directory as the model's .fsx file, and, except for the extension, must be given the same name as the .fsx file. So, let's create that file. Let's also create a subdirectory to put the distributed files into. Now edit PostOfficeXML.ffm in a text editor. The first thing we'll do is put the node MODEL:/Tools/Workspace into a different file. This is something you'll probably always want to do if you're using version management. The node MODEL:/Tools/Workspace stores all of the windows open in the model, so if you're editing a model and change or reposition the windows that are open, that will change the model file. For version management this is often just static that doesn't need to be tracked, so we'll have the node saved off to a different file, and then we can just never (or at least rarely) check that file into the version management system. Specify the following code:

<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
  <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
</flexsim-file-map>

Save the file map, then go back into FlexSim. Save the post office model again under the same name, "PostOfficeXML.fsx". Now look in the distributedfiles directory. You'll see that it contains a new xml file named windows.fsx. This xml holds the definition of the node MODEL:/Tools/Workspace. All interaction with the model remains the same, i.e. from FlexSim you just load and save the main xml model file, but now the model is distributed across multiple files.
If you look at the change to PostOfficeXML.fsx in your git client, you'll see that many lines have been replaced with a single line specifying the other file. This shows how much of the content of PostOfficeXML.fsx has been moved to distributedfiles/windows.fsx.

In the file map, the main document element is the <flexsim-file-map> tag. Inside the main document element should be any number of <map-node> elements. Each <map-node> element should have a path attribute, a file attribute, and a file-map-method attribute. The path attribute should specify the path, from the main saved node, to the node that is going to be saved into the distributed file. The file attribute specifies the path, relative to the saved model file, of the distributed file that you want to save. The file-map-method should have a value of either "single-node" or "split-points". In this case "single-node" means you want to save all of that node into windows.fsx.

Now let's do an example of the "split-points" method. Let's say we want to save the Tools folder in one file, the Source and Queue in another file, and the Processor and Sink in a third file. To do this, add another <map-node> element to the file map:

<?xml version="1.0" encoding="UTF-8"?>
<flexsim-file-map version="1">
  <map-node path="/Tools/Workspace" file="distributedfiles/windows.fsx" file-map-method="single-node"/>
  <map-node path="" file="distributedfiles/Tools.fsx" file-map-method="split-points">
    <split-point name="Source1" file="distributedfiles/Source_Queue.fsx"/>
    <split-point name="Processor3" file="distributedfiles/Processor_Sink.fsx"/>
  </map-node>
</flexsim-file-map>

Now save the file, save your FlexSim model, and again you'll see new files created in the distributedfiles directory. If a <map-node> element uses the "split-points" method, then it can have sub-elements. Each split-point should have a name attribute, defining the name of a node inside the map-node's defined node, and a file attribute defining the file to save to. The map-node has a path="" attribute. This means that it applies to the root node, i.e. the model itself. The map-node's file attribute defines the first file to use. This configuration tells FlexSim to save all sub-nodes in the model up to but not including the node named "Source1" to the file "distributedfiles/Tools.fsx", save everything from Source1 up to but not including the node named "Processor3" in "distributedfiles/Source_Queue.fsx", and save everything from Processor3 to the end in "distributedfiles/Processor_Sink.fsx".

Why Distribute?
The main reason you would want to distribute your model across multiple files is for version management. It allows you to easily see what you've changed in your model by seeing which files, and consequently which parts of the tree, have changed. If you inadvertently changed one piece of the model, you can easily see that and then revert that change if needed. When multiple modellers are developing the model, one modeller can essentially confine himself to one part of the model tree, and by associating that part of the tree with a specific file, it makes it much easier to merge changes from different developers, reduces the chance of conflicts in the merge, and makes it easier to do a manual merge if needed.

Note on connections: FlexSim's XML save mechanism does have one catch regarding input/output/center port connections, as well as any other mechanism that uses FlexSim's coupling data.
If you change connections in one portion of your model, it will actually change the serialized values of all connections/coupling data that occur after those connections in the tree, even if those connections were not changed. This can very easily cause merge issues if multiple modellers are changing connection data. However, if you distribute the model across multiple files, connection changes where both connection end-points are within the same file will only affect that file, and will not change other files. So if you can localize large "connection sets" into individual files, you can minimize the effect of changes and subsequently minimize merge conflicts.
View full article
Tokens and flow items can be very difficult to add to a chart. This is because they don't exist on Reset, making them difficult to select. This article shows how you can use a Process Flow to allow a Statistics Collector to record a token's changing label value, and also to chart that value over time.

The model for this example is attached (graphlabeldata.fsm). It is a very simple model: The Scheduled Source creates three tokens, each of which creates a label called data. This label is created by choosing "Add Tracked Variable" for the value, which opens a dialog box. The reason we want the label to be a Tracked Variable is that Tracked Variables emit an OnChange event. We want to listen to that event. If you use the time interval collection method, discussed later in this article, you don't need to make the label a Tracked Variable. Each token then goes through a loop, where it waits, and then updates the value. This is meant to represent a much more complicated model, where the token travels through many activities, any of which could change its label value. For this example, the model randomly changes the value on the label.

Now that we have a token and a label whose value is changing as the model runs, we can work on making a chart. We want to eventually make a Statistics Collector, but Statistics Collectors can only listen to events of objects that exist after the model has been reset. Tokens and flowitems (along with their labels) are destroyed on reset, and so we can't listen directly to them. However, some Process Flow activities can listen to events on tokens and flowitems, and the Statistics Collector can listen to those events. For that reason, we make a second Process Flow. This flow has an Event-Triggered Source, which listens for tokens to leave the "Init Tracked Variable" activity in the first flow. When that happens, the source creates a token, and that token immediately gets a reference to the label node (note that this is different from the value of the label node). Next, the new token goes to a Start activity called "Log Change." This activity is just a placeholder. While you could technically live without this activity, it makes things a little clearer, as we will discuss later. Other than providing OnEntry and OnExit events, the Start activity has no internal logic whatsoever. After passing through the Log Change activity, the new token waits for the label value to change. In order to listen to this event, you can first sample a Tracked Variable in the Toolbox; this provides the OnChange event. Then you can update the Object field to reference the label node. Notice that every time this event happens, the token simply passes through the Log Change activity, and then resumes listening.

When the original label value changes, it emits an OnChange event. When that event fires, the token listening to that event travels through the Log Change activity, which emits OnEntry and OnExit events. We can use these events in the Statistics Collector. The key to this technique is that we used Process Flow, which is good at listening to token and flowitem events, to generate activity events, which can be used in the Statistics Collector. In the attached model, the first Statistics Collector simply listens to the On Entry of the Log Change activity.
The columns are defined as follows:
- The Time column uses the Model Date/Time option.
- The second column gets the ID of the token as an integer.
- The third column gets the current value of the data label.

Now that the Statistics Collector is set up, we can configure the chart to use this collector, and split by the Token ID. The process to record the label value every interval (rather than on every change) is very similar. The downside is that the data is less granular, but the upside is that a label doesn't have to be a Tracked Variable to be charted. The example model simply uses a Split activity to copy the data from the Event-Triggered Source, and sends it to a similar listening loop. Instead of waiting for the value to change, the second token waits for a fixed time interval. A similar Statistics Collector will then let you chart the sampled values. This approach works for every token created by the scheduled source. No matter how many tokens you create, each will show up on the chart.
View full article
This example model was created to clarify pre-emption of task sequences created in process flows. Specifically it will show:
1) that pre-empting of task sequences generated in a process flow does not require the token generating the sequence to be pre-empted.
2) that you can push a partially created sequence to a tasksequence list type.
3) how to use recursive calls to a sub flow.
and afterwards:
4) how the milestone task can be used in a PF generated task sequence
5) the use of a nodefunction task type

Each queue is a member of the object process flow shown in the picture. Each generates a box along with a job to take it to another queue, and pulls an initial task executer from the list of FreeTEs. The first task, to travel and collect the box, is preemptable, so we push the task sequence to a Preemptable Job list before we add that travel task, and pull it off the list when the task has completed. The rest of the task sequence is a standard load, travel and unload set of tasks. When the job is complete we know that the TE is free and could preempt another TE - which could be desired if the newly free TE is closer to the pickup queue than the one currently assigned. The CheckPreempt subflow is called with a label referencing the TE that has become free. This TE may not be the original TE that we pulled from the list of FreeTEs when the job was started, so we discover the current TE by looking for the task sequence's owner object - done with the function findownerobject(), since it does not cache the value in the same way ownerobject() does, and we could have had more than two TEs assigned to this tasksequence. Since the task sequence gets destroyed, I do this in the Assign Labels activity before we finish the task sequence.

The Check Preempt subflow first tries to pull a preemptable job where the invoking TE is closer than the current executer of that job, choosing the one where the difference is most pronounced. The tasksequence list type has the distance field prepopulated, and from that the two other field expressions, otherDistance and howMuchCloser, are derived. If no task sequence is chosen, then we push the finishing TE to the FreeTEs list. If we choose a task sequence to preempt, then we first label the token with the otherTE and then preempt the task sequence with a simple zero-delay task. Then we reDispatch the task sequence to the TE that is calling Check Preempt. Since we now know that we've freed the otherTE from the task sequence it had, we can invoke the Check Preempt subflow for that task executer, knowing that it will either get put onto the FreeTEs list or get another task sequence and preempt another TE. This is therefore a recursive subflow call, with the exit condition being that no preemptable tasksequence is found. Note that no token preemption from activities takes place; this is because the tasksequence is still tied to the token and its activity, which will still receive the callback when a task is complete, regardless of which task executer actioned the task. TEpreemptIfCloser1.fsm

Next: changing colors to indicate preemptable TEs and their destination - using milestone and nodefunction tasks. To better see the allocation and preempt activities, we'll now change the model so that the TE color reflects the following:
White when idle
Gray when busy and not preemptable
Matching the pickup queue's color when preemptable.
First of all I defined two user commands - one to set the color of the TE (setTEcolor) and another (commandNode) to get the executable node of a user command.
This is so that I can add the fn() SetColor activity (a custom task of type nodefunction); I specify the function using commandNode("setTEcolor") and pass in the queue color as parameter 2 (parameter 1 will be the TE that is executing this task). This means the user command code is just:

Object te = param(1);
Color col = param(2);
te.color = col;

The milestone task allows us to define a set of tasks that need to be repeated if the TE gets pre-empted. It is added again using the Custom Task activity, selecting the type as Milestone, and has a parameter to specify the number of subsequent tasks that should be considered within the preemption range that will trigger tasks to be repeated. Since we want the travel preemption to retrigger the setting of the color, we set the range to 2. However, since the token only needs to be notified when the travel task is complete that it should go on to the next process, and until then waits in the travel task, we can uncheck the "Wait Until Complete" box of the Milestone and fn() SetColor activities. If you don't clear this for those two tasks, expect some misbehavior/desynchronization with the token. Finally, there needs to be a reset trigger on the TE to set the color to white, and a regular Change Visual activity after the pickup point is reached to set it to gray. Note that I prefer the use of commands to sending messages, which would be another option, mostly because users can tell the intent of the call without having to inspect the message code. TEpreemptifCloser2.fsm
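Since user commands are callable as ordinary FlexScript functions, you can also test setTEcolor directly, for example from a script window. A quick sketch; the object name is hypothetical:

setTEcolor(Model.find("Operator1"), Color.red); // paints the TE red, as the fn() SetColor task would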
View full article
In this video you will learn how to change the color of FlowItems and 3D objects in FlexSim using the Properties window and Triggers. For more tutorial videos, you can visit the FlexSim Andina YouTube channel and access our FlexTips playlist.
View full article
This article demonstrates how to use the Statistics Collector and Calculated Tables to create three utilization pie charts:
- a state pie chart
- an individual utilization pie chart
- a group utilization pie chart

Example Model
You can download the model (utilizationdemo.fsm) to see the working demonstration. The model has a Source, a Processor, a Sink, a Dispatcher, and several operators. The operators carry flow items to and from the processor, as well as operate the processor. The operators are in a group called Operators.

State Pie Chart
First, we need to make a Statistics Collector that collects state data for the operators. The easiest way to do that is to use the pin button to pin the State statistic for any object in the model. Use the pin button to pin a pie chart to a new dashboard. The pin button creates a new Statistics Collector, as well as a new chart. Open the properties for the new Statistics Collector (double click on it in the toolbox), and change its name to OperatorStatePie. On the Data Recording tab, remove the object from the Enumerated Rows table. Using the sampler, add the Operators group (you can sample it in the toolbox). Now, when you reset and run the model, the state chart should work.

Utilization Pie Chart
Often, users need to combine several states into a single value that can be used to determine the utilization of an object in the model. In order to gather this data, we can use a Calculated Table. Make a new Calculated Table, and give it the following query:

SELECT Object, (TravelEmpty + TravelLoaded + Utilize) / Model.statisticalTime AS Busy, 1 - Busy AS NotBusy FROM OperatorStatePie

This query sums the time in several states into a total, and then divides by the statistical time. Be sure to set the name of the table (the part after FROM) to the name of your Statistics Collector. Run the model for a little bit of time, and then click the Update button on the properties window. You should get a table with one row per operator. Instead of viewing the data as just numbers, change the Display Format of each column to better represent the data. On the Display Format tab, set the Object column to display Object data. Then set the other two columns to display percentages. When you switch back to the Calculations tab, the data will be formatted. Once you get the query right, set the update mode to Always. This will update the data in the table whenever the data is needed, including every time the chart draws. If updating the table is computationally expensive, you can use the By Interval or Manual options. Generally, a small number of rows (1-100) is small enough to use the Always mode. Regardless of the update mode, we can make a chart based on this table. In the dashboard, create a new Pie Chart. For the Data Source, select the calculated table. For the Pie Title, select the Object column. For the Center Data, select the Busy column. Be sure to include the Busy and NotBusy columns. This should show you a pie chart comparing each operator's busy and not-busy time.

Group Utilization
To make the final utilization chart, make a second Calculated Table. The query for the second table should be as follows:

SELECT AVG(Busy) AS AvgBusy, AVG(NotBusy) AS AvgNotBusy FROM CalculatedTable1

Again, use the Update button to be sure the query is correct. Once it is, set the update mode to Always.
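Incidentally, you can run the same kind of query by hand with FlexScript's Table.query() to spot-check values in a script window. This is a sketch, not part of the article's model; it assumes the collector is named OperatorStatePie as above and that its table is visible to the query engine by name:

// run the utilization query manually and read the first operator's busy fraction
Table result = Table.query("SELECT Object, (TravelEmpty + TravelLoaded + Utilize) / Model.statisticalTime AS Busy FROM OperatorStatePie");
double busy = result[1]["Busy"]; // e.g. 0.73 means 73% busy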
Finally, you can make the pie chart for this data:
Things to Try
If you feel comfortable with this model, you can try a couple of extra tasks, such as: Remove one of the operators from the group, reset, and run; the charts will update accordingly. Add the Processor to the group, reset, and run; the state chart should work automatically.
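As one more check, you can verify the Busy column against the model itself. The following is a minimal FlexScript sketch for a script window, assuming one of the operators is named Operator1; it totals the same three states the query uses, via the legacy getstatetime() command:
Object op = Model.find("Operator1"); // hypothetical operator name
double busyTime = getstatetime(op, STATE_TRAVEL_EMPTY) + getstatetime(op, STATE_TRAVEL_LOADED) + getstatetime(op, STATE_UTILIZE);
return busyTime / Model.statisticalTime; // should match that operator's Busy value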
In version 2018 and later, you can make this chart by dragging the Throughput Per Hour by Type template from the dashboard library. If you install the template (available on the Advanced tab), you will see a Process Flow and a Statistics Collector appear in your toolbox.
One of the most common questions from FlexSim users is as follows: How do I make a chart that shows the output every hour? You can make this chart in three steps.
Configure the Statistics Collector
First, you need a Statistics Collector. Make a new one in the toolbox (click the green plus button, select Statistics, and then select Statistics Collector). On the Event Listening tab, use the green plus button to add a timer event, and configure it as shown here:
This timer event will fire every hour (every 3600 seconds) in the model. Notice the shared label, which stores all members of the Processors group as an array. We will use this label in the next step. Once you have configured the timer, you need to set up the row mode for this collector. We want one row per processor, and we need to use the Processors label as the row value. Since the Processors label is an array, we will get one row per processor for each timer event (three in this example model).
Finally, we can add the columns. The three columns are as follows:
Time - use the pick list to select Model Date/Time from the Time menu
Object - use the pick list to select ID of row value from the IDs menu
Output - use the pick list to select Statistic by Object from the Object Statistics menu, and use data.rowValue as the object value in the popup
If you use the pick lists to choose these options, the storage type and display format options should be set automatically. With these three columns in place, we can watch the table populate. Reset and run the model at high speed. Every model hour, you should see a new set of rows appear, one for each processor in the group. The table will look something like this:
Configure the Calculated Table
The Statistics Collector table from the previous steps is close to what we want, except that the output value always increases as the model runs. But what about the output for just a single hour? To get that value, we can use a Calculated Table. Make a new Calculated Table, and give it the following query (in the Query field):
SELECT Time, Object, ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), Output) AS OutputPerHour FROM StatisticsCollector1
This query uses SQL window functions. Basically, each row's OutputPerHour is its cumulative Output minus the previous row's Output for the same object. If that difference is NULL (because the row is the first one for its object, so there is no previous row), the query just uses the Output value itself, since everything produced so far was produced in that first hour. A worked example appears at the end of this article. If you reset and run the model, so that the collector table has at least a few rows in it, click the Update button to run the query. Notice that the Time and Object columns show numbers. This is because the Calculated Table can't infer the formatting of the columns. To set the formatting, use the Display Format tab. You may also want the table to update every hour, in step with the Statistics Collector (using the By Interval update mode).
Make the Chart
Now that the data is correct, we can make a chart. Make a new dashboard, and create a Time Plot chart. Point the chart to the Calculated Table, using the Time column for the X values and the OutputPerHour column for the Y values. In addition, make sure to split by the Object column. If the Calculated Table updates every hour, then running the model should produce the chart shown at the beginning of this article.
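To see what the window function is doing, here is a worked example for a single processor, with made-up numbers:
// Hypothetical rows for one processor:
//   Time    Output (cumulative)    LAG(Output)    OutputPerHour
//   1 hr          5                  (none)             5
//   2 hr         12                     5               7
//   3 hr         18                    12               6
// Each row subtracts the previous row's cumulative Output for the same
// Object; the ISNULL fallback covers the first row, which has no
// previous row to subtract.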
Here is the model used to create this chart (should work in 2017 Update 2 Beta or later; beta must be built on or after August 21, 2017). outputperhourdemo.fsm
One of the most powerful uses of a Fixed Resource Process Flow is to unify a set of machines and operators as a reusable entity. For example, many factories have several production lines. Simulation modelers often wonder what would happen if they were to open or close production lines. If you were to open a new line, would the increased output be worth the cost? Or if you were to close a line, would the factory still be able to meet demand? These are the kinds of questions modelers often ask, and by using a Fixed Resource Process Flow, you can answer them. This article demonstrates how to use a Fixed Resource Process Flow (or FR Flow) to coordinate several machines and operators as a single entity. The example model represents a staging area, where product is staged before being loaded onto a truck. However, most of the concepts discussed here could be used in any Fixed Resource Process Flow.
Creating a Collection of Objects
When you set out to build a reusable FR Flow, it is usually best to start by making a single instance of the entity you want to replicate. Often, it is convenient to build that entity on a Plane. When you drag an object into a Plane, it is owned by the Plane, which makes the Plane a natural collection. If you copy and then paste the Plane, the copy will have all the same objects as the original Plane. This makes it very easy to make another instance of your entity: just copy and paste the first. If the Plane is attached to an FR Flow, then the copy will also be attached to the same flow, and will behave in exactly the same way. In the example model, you will find a Plane with a processor, five queues, and two operators. This makes up a single staging area. The Plane itself is attached to the FR Flow, meaning an instance of the FR Flow will run for the Plane. It also means the Plane can be referenced by the value current. (You may find it helpful to set the reset position of any operators, especially if they wander off the Plane at any point during the model run.)
Using Process Flow Variables
In order to drive the logic in your entity using an FR Flow, you will need to be able to reference the objects in each entity, so that they are easily accessible by tokens. For example, if items are processed on Machine A, Machine B, and Machine C in a production line, then you would want an easy way to reference those machines in each line. This is where Process Flow Variables come in. In the example model, you will find the following Process Flow Variables: every staging area has a Packer, a Shipper, and a Palletizer, each referenced using the node command. Remember that current is the instance object, which is the Plane.
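For example, the value of the Packer variable might be an expression like the following minimal FlexScript sketch (the object name matches the sample model; the exact expression in your model may differ). Because the path is resolved relative to current, each Plane's instance finds its own Packer:
// Value of the Packer Process Flow Variable:
// find the object named "Packer" inside the current instance (the Plane)
node("/Packer", current)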
Once these variables are in place, you could create an FR Flow like the following: If you ran this model with the Plane shown previously (complete with correctly named objects) and this flow with the variables shown, then the Packer operator would travel to the Palletizer. If you then copied the Plane, you would see that all Packers go to their respective Palletizers, as shown below:
Using Variables and Resources Together
Sometimes, you may want to adjust the number of operators per line, or even the number of operators in a specific role. To do this, you can again use Process Flow Variables. The sample model includes these variables: The sample model also includes these resources; the properties for the Shipper resource are shown:
Because this resource is Local, it is as if each attached object has its own version of the resource. Because it references a 3D operator, it will make copies of that operator when the model resets. The number of copies it creates depends on the local value of the ShipperCount variable. To edit the value for a particular instance object, click on that object in the 3D view, and edit the value in the Quick Properties window.
Disabling Entities
Often, modelers simply want to "turn off" parts of their model. If that part of the model is controlled by an FR Flow, you can easily accomplish this task. In the example model, you will find the following set of activities: The Areas resource is numeric, and it is Global. The Limit Areas activity is configured so that any token that can't immediately acquire the resource goes to the Area Disabled sink. When the token that controls an area dies, that area is effectively disabled. The example model uses a Process Flow Variable to control how many areas can be active at a time: The Areas resource uses this value to control how many shipping areas are actually active. If this number is smaller than the number of attached objects, then some of the areas won't run. Note that the order in which the objects are attached matters: the first attached area will be the first to generate a token in the Init Area source, and so will be the first to acquire the resource.
Using Process Flow Variables in the Experimenter/Optimizer
Currently, only Global and User Accessible variables can be accessed in the Experimenter. In the sample model, the only variable that can readily be used in an experiment is the TotalAreas variable, which dictates how many areas can be active. This allows the Experimenter or Optimizer to disable areas. To make it possible for the Experimenter or Optimizer to vary the number of Shippers or Packers per area, the ShipperCount and PackerCount variables could be changed to Global Variables. You could put labels on the instance object (the Plane) that are read by the variable, like so (see the sketch at the end of this article): Then the Experimenter could set the label value on the Plane object that is attached to the FR Flow. The Experimenter would set the label, which would affect the value of the variable, which would affect how many copies of the Packer are created by the Packer resource. You could do the same for the ShipperCount variable.
Sample Model
The attached model (usingfixedresources-6.fsm) provides a working demonstration of the principles in this article. It comes with only one Plane object attached to the FR Flow. Here are some things to try with the model: Reset and run the model to see how it behaves with a single area, where that single area has only one Packer and one Shipper. Create a copy (use copy/paste) of the Plane, then reset and run the model again to see how a second area is automatically driven by the same flow; you may need to adjust the TotalAreas variable if the second area doesn't run. Adjust the number of Packers and Shippers (using the Process Flow Variables) on each Plane, then reset and run the model to see how adding more packers and shippers affects it. Create many copies of the first Plane (it is best to set the PackerCount and ShipperCount to 1 before copying), then reset and run the model to observe the effect. Add or remove some staging areas (queues) in an Area.
The model handles those changes as well (in the Put Stations On List part of the flow). Try setting up an experiment or optimization that varies how many areas are active, and how many Packers and Shippers are in each. In general, play around with the sample model, or try this technique on your own.
Other Applications
This technique can be used to create production/packaging lines, complex machines, or any conglomerate of Fixed Resources and Task Executers. If you follow the methods outlined in this article, you will be able to create additional instances of complicated custom objects with ease. You will also be able to configure your model for use with the Experimenter or Optimizer.
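As a sketch of the label-driven approach mentioned above, the value of a global PackerCount variable that reads its count from the instance object might look like this (a minimal example; it assumes the Plane carries a number label named PackerCount):
// Value of the PackerCount Process Flow Variable:
// read the PackerCount label from the instance object (the Plane)
current.PackerCount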