FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy-to-set-up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm

All necessary setup is done through labels of the collector. The first three are actually identical to labels found on the default Statistics Collector behind a state bar chart.
- Objects should point at a group that contains all objects the collector should track the utilization for.
- StateTable is a reference to the state table that will be used to determine which states count as 'utilized'.
- StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile).
- MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization.
- NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600s, taking 12 measurements within that interval, i.e. one every 300s. Each measurement still denotes the utilization over the complete MeasureInterval. The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the hour preceding that point in time.
- StoredTimeMap is used to allow the collector to correctly function past a warmup time, by storing the total utilized time of each object up to that point. This should not be manually changed.

Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply".

The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The "NumSubIntervals" label essentially just controls how many entries are kept in that array.

To copy the collector into another model, create a fresh collector in the target model. Then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector.

I hope this can help to speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it).

Best regards
Felix

Edit: Added a user library with the collector as a draggable icon to the attached files.
Edit2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this.
Edit3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
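For those curious about the mechanics, one measurement boils down to something like the following sketch (hypothetical names throughout — the collector's real code lives in its row labels and event logic, so treat this purely as an illustration of the rolling-window idea):

// Sketch: one rolling-window measurement for one tracked object.
double measureInterval = 3600;             // the MeasureInterval label
double numSubIntervals = 12;               // the NumSubIntervals label
Array history = current.history;           // per-object samples of total utilized time
history.push(current.totalUtilizedTime);   // newest cumulative sample (hypothetical label)
if (history.length > numSubIntervals + 1)
    history.shift();                       // discard the oldest sample
// average utilization over the window covered by the stored samples:
double windowTime = (history.length - 1) * (measureInterval / numSubIntervals);
double utilization = windowTime > 0 ? (history[history.length] - history[1]) / windowTime : 0;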
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm

The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type, that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2, and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories, namely initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based order generation as well as history-based order generation.

Initial Inventory
The model includes a Global Table of Initial Inventory. The process flow's initial inventory section reads this table, and then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot().

Inbound
I use the process flow to assign a slot to each incoming item, via an Assign Labels activity called Find Slot. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity properly finds a slot to store the item, then I go ahead and assign the item to that slot, and have an operator place it in the rack.

Outbound
I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method. In the random method, orders are generated randomly every ~30 seconds. Each order includes a number of SKU line items (again, random) and each line item includes a quantity of that SKU (again, random). Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, it reserves the item as "outbound" by assigning the Storage.Item.assignedSlot property to null (Set to Outbound activity). This ensures that no other process will find that same item for picking. The history-based order generation process uses much of the same functionality as the random-based one, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format for what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order, then spawning that order.

Custom Rack Visualization
I have also customized the visualization of the racks. I have added text to the front of each rack slot (and to the floor at the bottom) that shows the address of that slot. Further, I've given the text a background that is color-coded to the SKU that the slot is designated to store. This was all done through the Storage System's Visualizations tab, by customizing the Rack visualization.
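For reference, the kind of call the Find Slot pick option wraps looks roughly like the sketch below. This is from memory, not sampled from the model — the exact query fields and the findSlot() argument list may differ in your version, so treat the signature as an assumption and sample the pick option to see the real generated code:

// Find a slot whose Type matches the item's and that has room, in random order,
// then reserve it by assigning the item's slot (assignedSlot is the real property
// named in the article; the query text here is approximate).
Storage.Slot slot = Storage.system.findSlot(
    "WHERE Type = $1.Type AND slot.hasSpace($1) ORDER BY RAND()", 1, item);
if (slot) {
    Storage.Item sItem = Storage.Item(item); // storage wrapper for the flowitem
    sItem.assignedSlot = slot;               // reserve the slot for this item
}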
This model and library will allow you to produce a heat map of anything moving in the model - including AGVs and flowitems. To add this to a model is simply a matter of:
1) Load the attached user library
2) Add objects to the group HeatMapMembers
3) Drop a heat map object (cylinder) into the model - reset and run.
With this updated version you can now have multiple mapper objects in the same model showing different groups - made easier by the addition of a 'groupName' label on the mapper. You can easily change the height at which the map is drawn using the 'zdraw' label, and alter the sampling interval and grid size using the 'heatInterval' and 'resolution' labels. The resolution is the number of divisions per model length unit. In the example model, which is set to metres, a value of 2 gives 4 divisions per square metre. Currently non-flowitems are set to ignore time when the object is in an idle state. HeatMapAnything.fsl HeatMapAnything.fsm
The attached model contains a BasicTE to mimic some operations of a tower crane. You should be able to use it like any other task executer. Labels on the crane allow the speeds and operating heights to be altered. To change the jib/beam length, use the label parameter and it will apply at reset. Similarly, to change the height, for now just change the tower height and press reset to have the rest attach at the correct height. TowerCrane_basicTEexample.fsm

Update: Added a user library that will scale the crane based on the model units. Also changed some labels so that rotational speed is specified there, and the jib/beam now uses the object properties for max speed and acceleration. TowerCrane.fsl
Engage with the FlexSim community here on the FlexSim forum boards. Check out our learning resources. Customers with current licensing can request direct technical support from their FlexSim representative, via phone, email, web meeting, or support ticket.  
Using Mixamo Fuse, Mixamo, and (optionally) 3ds Max, you can create animated characters for use with FlexSim. For modifying the existing operators' animations, see Adding More People Animations From Mixamo.

Character Creation
You can customize a character’s look using Mixamo Fuse. After creating the character, you can save your character as a .fuse file in case you want to edit it again later in Mixamo Fuse.

Character Rigging
From Mixamo Fuse, you can press the Animate button in the upper-right corner to upload the character to Mixamo’s website to apply animations. You will need to log in with a Mixamo account. You will be presented with the Mixamo Auto-Rigger. After a few moments, your character will appear in a view with a default animation applied. Choose whether to enable facial blendshapes and select the appropriate level of detail for the skeleton, such as 2 Chain Fingers, and then press Update Rig. The auto-rigger will reapply and the view will be updated with the newly rigged character. If the settings are good for your needs, press the Finish button.

Character Animating
Once the character is rigged, you can select it on the Mixamo website and press Find Animations to find animations to apply to the character. You can search for animations and add them to packs. For each animation, you can adjust various parameters to better fit the animation to your character.

Exporting the Character and Animations
Once you have applied and customized animations for your character, you can add them to the Cart and check out in order to have them available for download. After checking out, you can select an animation or animation pack to download on the My Animations page. You can apply the same animation sets to different characters using the Change Character button after selecting the animation set. Press the Queue Download button to tell the Mixamo server to create the files for download. You want to use the FBX format so that you can edit it with 3ds Max. The pose can probably be T-Pose or Original, because the Original is probably a T-Pose anyway. I use 30 Frames per Second for consistency and ease of timing animations. Keyframe Reduction didn’t seem to do anything in my tests, and you can do it yourself with more control in 3ds Max, so I use “none.” If you exported multiple animations, the output will contain one fbx file that is the character shape and several other fbx files that contain only animation data for each animation.

(Update) Important Note: Starting with FlexSim 2017 Update 2, the rest of the steps in this document are no longer necessary. FlexSim 17.2 added support for embedded textures, specular maps, and gloss maps that fix the material issues directly without modification in 3ds Max. FlexSim 17.2 also added support for assigning animations to a shape from multiple files, so you can import the Mixamo output files directly without having to merge all the animations into a single file. The following steps can be used if you want to optimize or otherwise modify the shape files, but are no longer required.

Preparing the Character using 3ds Max
Import the character shape .fbx file into 3ds Max and save the file as a .max file. You will want to resave as a .max file at different points with different file names as you progress so that you can easily revert back to a previously known working point if something messes up. The FBX Import window will show you lots of options you can modify. The defaults seemed to work fine. I tried messing with the “Units” scale factor and file unit conversions, but I was unable to find any options that improved the scaling in FlexSim. Ultimately, I found it best to just use Automatic and sort out the units myself using FlexSim’s shape factors. Export the file as fbx and bring it into FlexSim to see how it looks. Again, the default options seem to work fine. In FlexSim, the size will probably be 100x too big (no matter what “unit conversion” options I set in the import/export options). Resize the character to be 100x smaller.

Fixing the Materials in 3ds Max
When you first bring a Mixamo Fuse character into FlexSim, it will probably look shiny and dark. That can be fixed in 3ds Max. Open the Material Editor, then double-click each material in the list of Scene Materials on the left to add it to the middle view for editing. Press the “Lay Out All – Vertical” button to arrange the nodes nicely in the middle view. When you imported the .fbx file, 3ds Max should have automatically unpacked its textures into a directory called CharacterFileName.fbm. When you double-click on a texture node in the Material Editor view, you should be able to see and edit the path where it is referencing each texture in the Material Parameter Editor pane on the right.

Removing the shininess
Each material has Ambient, Diffuse and Specular colors, in addition to any textures that are mapped to various channels. The operator is shiny everywhere because each material’s Specular color is set to white and their specular maps are mostly black. Since FlexSim doesn’t currently read specular maps and gloss maps, the white specular color is being applied everywhere instead of the black color from the specular maps. To fix it, click each specular map and gloss map in the center view and delete them with the Delete key. Then double-click on each main material and set its specular color to black to turn off the shininess. Save the .max file with a new name, and re-export the .fbx file to test it in FlexSim. The shininess should be gone.

Brightening the darkness
FlexSim uses Assimp to read fbx files. Assimp’s fbx importer sets a default Ambient color value of dark gray on the materials instead of leaving it off. This is why the character looks darker than it should. To fix it, we can simply specify an ambient color of white on each material. For each material, set the ambient color to white and check the Ambient Color box under Maps. Save the .max file with a new name, and re-export the .fbx file to test it in FlexSim. The darkness should be fixed.

Setting local texture paths
Based on the procedure above, the texture paths are likely to be absolute paths when you export the fbx file. You can fix that by saving the .max file, the .fbx file, and the textures all in the same directory. In Windows Explorer, copy all the useful textures from the CharacterName.fbm directory into the same directory where you have been exporting the .fbx file. Then also save your .max file to that same directory. Once everything is in the same directory, you can reselect the texture file paths for each material so that they are pointing at these files instead of the other files. Then, when you export the fbx file, each texture’s path will be relative, so you can copy all the necessary files onto any computer and use them correctly. After changing all the materials, export the fbx file again. You should be able to copy the fbx file and all its textures into a directory on a different machine and have them display properly. Save your .max file with all the updated materials.

Making part of the object change color
To make a material show the FlexSim color, append _fsclr to its name in 3ds Max.

Configuring Animations
With the character loaded, you can import the fbx files with just animation data, and 3ds Max will automatically apply that data to the shape. The slider at the bottom controls the currently applied keyframe. The buttons in the bottom corner play the animation, pause, or step between keyframes. Animation data can be stored beyond the relevant range. You can open the Time Configuration dialog by pressing the button in the bottom corner. In that dialog, you can specify the speed of the playback and set a Start Time (keyframe number) and End Time for the animation you want to see in the viewport when you play. This dialog only changes the playback options in the software; the actual data outside the specified range is still preserved. You may need to adjust these values as you import/modify/combine animations into one timeline.

Keyframes
Opening the Track View – Curve Editor from the top toolbar lets you edit and modify keyframes. Click in the left pane and press Ctrl-A to select everything in the model. You should see keyframes appear if you have animation data loaded. Sometimes this window doesn’t refresh correctly and you can’t find keyframes. You can try closing it and reopening it, or pressing the zoom-extents buttons to re-center the view on the keyframes in the configured time range. To edit the animations, you need to be able to see the keyframes in this view. You can move the current frame by dragging the yellow bar around. You can zoom by scrolling the mouse wheel. Ctrl-mouse wheel will zoom the timeline (X-axis) without zooming the key values (Y-axis). You can select keyframes by dragging a rectangle around them. You can delete selected keyframes with the Delete key. You can move keyframes by dragging them. If you hold Ctrl while dragging them, the movement is constrained to the X-axis, so that you don’t actually edit the keyframe’s values, just its time. You can duplicate keyframes by holding Shift and dragging them. Again, holding Ctrl will prevent the new keyframes’ values from shifting by holding the Y-axis still. You can scale keyframes by selecting them, putting the key keyframe indicator on the first keyframe in the selection, picking the menu option Edit > Transform Tools > Scale Keys Tool, and then clicking and dragging on one of the selected keyframes to scale all of the selected keyframes towards the yellow bar. This can be helpful to sync the timing of animations, such as walking empty vs. walking loaded.

Preparing Animations for Merging
When you load an animation into the file, the Track View – Curve Editor will show you the individual components that have keyframe animations. Different animations may affect different components, and the interpolation of transformation information between keyframes may get messed up when you merge different animations that affect different components into one timeline. To fix this, you can stamp a keyframe at the beginning and end of each animation that stores each component’s position at that point. You can also duplicate the first and last keyframes to make it easier to specify perfectly-repeating animation clips in FlexSim later. Stamp a keyframe by moving the current frame (yellow line) to the keyframe you want. Then select all within the Track View – Curve Editor to select every component. Finally, press the Set Keys button at the bottom of the main 3ds Max window to store each component’s value as a keyframe. Duplicate keyframes by holding Shift and dragging them; also holding Ctrl will prevent the new keyframes’ values from shifting by holding the Y-axis still.

Merging Multiple Animations into One File
To merge multiple animations into one file, you need to export each of the animations as an animation file (XML Animation File (*.xaf)). Then you can import each of these animations into a specific range of keyframes in the timeline. From the .max file without any animations loaded, import one of the fbx animation files. It will automatically apply it to the shape. Click in the Perspective view and press Ctrl-A to select everything. Then prepare the animation using the information above. Lastly, select the menu option Animation > Save Animation to save the animation as an XML Animation File (*.xaf). Reload the .max file without any animations and repeat this process for each animation you want to merge. Reload the .max file without any animations loaded. Click in the Perspective view and press Ctrl-A to select everything. Then select the menu option Animation > Load Animation to load an animation into the timeline. On the right side of the Load XML Animation File dialog, you can specify the keyframe number at which the animation should be inserted. Do not insert the animation at keyframe 0; make sure that keyframe 0 is the bind T-pose. This will be important later if you want to edit the mesh without breaking the skeletal rigging. You also need to be sure to specify Absolute and not Relative. Select the animation you want to import, specify the keyframe you want to insert at, and then press Load Motion to load the keyframes into the animation. Do this for each animation file you want to merge. Use the Track View – Curve Editor to determine the keyframe at which to insert each subsequent animation. Save your .max file with all the merged animations to a new file.

Modifying the Mesh without Breaking the Rigging
Sometimes you may want to modify the mesh after you have applied animations. I can’t guarantee that this always works perfectly, but there is a way to make some tweaks even after animations have been applied. Before trying to edit a skinned mesh, be sure to save your work so you can revert back to a clean state if something doesn’t work correctly. First, you need to be sure that the bind pose is frame 0 and that you are set to frame 0 on the timeline. Second, click on a mesh in the scene and click on the Modifiers tab. You will probably see an Editable Poly with a Skin modifier applied. If you click on the Skin modifier and expand the Advanced Parameters, you will see an Always Deform checkbox. If you clear the Always Deform box, you can then click on the Editable Poly and modify it. You then need to recheck the Always Deform box on the Skin modifier once you are done making edits. Ensure that your animations still work after making edits. In my tests with the operator shape above, when I cleared Always Deform, the mesh moved up slightly. This made the Tops mesh not line up correctly with the Body mesh. To fix that, I cleared and immediately checked Always Deform for each mesh (each mesh then moved up the same amount). Then I verified that the animations still worked and that the arms still lined up correctly with the shirt.

Reducing Polygon Count
3ds Max has features for reducing the polygon count of an object. You can apply these modifiers without breaking the rigging by following certain steps. You can display the polygon count of your shape by clicking in the Perspective view and pressing the 7 key on the keyboard. As stated above, before modifying the mesh, clear Always Deform on the Skin modifier. To reduce the polygon count, you can use the ProOptimizer modifier. After clearing Always Deform, click on the Editable Poly and then select ProOptimizer from the Modifier List to add it to the stack between the Editable Poly and the Skin modifiers. In the Optimization Options, be sure to check Keep Textures. Then press the Calculate button. Then you can specify a Vertex % and it will start to remove vertices from your mesh. After applying the ProOptimizer modifier, the normals on your mesh might be messed up. You can recalculate the normals based on a crease angle using the Smooth modifier. Apply the Smooth modifier after the ProOptimizer and before the Skin modifier. Check the Auto Smooth box and specify a Threshold. You can use a value of 89 to get a very smooth surface, only applying a crease to angles that are greater than 89 degrees. After making these changes, be sure to recheck Always Deform on the Skin modifier, and test your animation to make sure the rigging is all still working properly.
The attached model contains functionality to depict the item flow as a 3D map, using a FlowMapper3D object (cylinder) and an associated Object Process Flow. Additionally, a 'kpi' label on the object gives an indication of layout performance, which you can link to and observe as you interact with or experiment on the layout. To set this up in your model you'll need to add a Group of objects whose entry events will be used by the mapper - calling that Group "FlowMapperObjects". Then you'll need to add a ColorPalette called "HeatPalette". Finally you'll want to copy the FlowMapper3D object and the FlowMapperProcess to your model. Note that there is a boolean label 'showPercents' on the FlowMapper3D object to tell it whether to show percentage text or the number of flowitems for each location pair. 3DFlowMapper.fsm
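If you want to read that 'kpi' label programmatically - say, from a Performance Measure set to evaluate FlexScript - label access in code is as simple as the following sketch (the object name here is hypothetical; use whatever your mapper instance is called):

// Read the layout-performance indicator off the mapper object.
Object mapper = Model.find("FlowMapper3D"); // hypothetical name in your model
double kpi = mapper.kpi;                    // label access by name
return kpi;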
Attached is an example model and user library comprising commands to return an array of objects whose bounding boxes intersect, plus a Collision Detection object to drop into your model. The Collision Detection object has a ticker interval label to adjust the frequency of checks, and will switch the colliding objects to selected. It looks for two groups: "Obstacles", containing static objects in the scene (which may be overlapping without being recorded as collisions), and "Colliders", which are the objects navigating the scene whose bounding boxes should be checked for intersections. In the example model I'm adding each flowitem when it is created using Group("Colliders").addMember(item). The detector code is on the object's FlexScript label, 'analyseScene', which is first scheduled to run by the object's reset trigger. collisionDetection3.fsm BBCollisionDetection2.fsl
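At its core, a check like this comes down to an axis-aligned bounding-box overlap test per object pair. Here is a minimal sketch of that test - not the library's actual code - which assumes unrotated objects and treats each object's location as the minimum corner of its box, something you may need to adapt to FlexSim's object origin conventions:

// Returns 1 if the two axis-aligned boxes overlap on all three axes.
int boxesIntersect(Object a, Object b)
{
    Vec3 aMin = a.location;
    Vec3 bMin = b.location;
    Vec3 aMax = Vec3(aMin.x + a.size.x, aMin.y + a.size.y, aMin.z + a.size.z);
    Vec3 bMax = Vec3(bMin.x + b.size.x, bMin.y + b.size.y, bMin.z + b.size.z);
    return aMax.x > bMin.x && bMax.x > aMin.x
        && aMax.y > bMin.y && bMax.y > aMin.y
        && aMax.z > bMin.z && bMax.z > aMin.z;
}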
FlexSim 2022 introduced a Reinforcement Learning tool that enables you to configure your model to be used as an environment for reinforcement learning algorithms. That tool makes connecting to FlexSim from a reinforcement learning algorithm easier, but that tool is not absolutely necessary for this type of connectivity. The same socket communication protocols that are used by that tool are available generally in FlexScript. Attached (ChangeoverTimesRL_V22.0.fsm) is the FlexSim 2022 model that you build as part of the Using Reinforcement Learning documentation that walks you through the process of building and preparing a FlexSim model for reinforcement learning, training an agent within that model environment, evaluating the performance of the trained reinforcement learning model, and using that trained model in a real production environment. Also attached (ChangeoverTimesRL_V6.0.fsm) is a model built with FlexSim 6.0.2 from 2012 that does the exact same thing, but with custom FlexScript user commands instead of the Reinforcement Learning tool. You can use this model with the example python scripts and FlexSim 6.0.2 in the same way that you can use the other model with those same scripts in FlexSim 2022. I'm providing this FlexSim 6 model as an example that demonstrates how you can communicate between FlexSim and other programs. The Reinforcement Learning tool certainly makes this type of communication easier and simpler, with a nice UI for specifying RL-specific parameters, but the fundamental principles of how this works have been available in FlexSim for many years using FlexScript. Hopefully this example can help teach and inspire those who wish to control or communicate with FlexSim from external sources for purposes other than just reinforcement learning. FlexSim is flexible, and the possibilities are endless.
In addition to the animations that are on the Operator and Person flowitem by default, you can easily download and add more animations from Mixamo. Below are the steps to do that. If you want to create new characters with animations, see Bone Animations.
1. Download the attached two files: FlexSim_Operator.fbx and FlexSim_F_Operator.fbx. These are the versions of the male and female operator shapes as exported from Mixamo originally, before they were modified and optimized in 3ds Max.
2. Log into your account at https://www.mixamo.com and press the Upload Character button in the Characters section of the site.
3. Drag the FlexSim_Operator.fbx or FlexSim_F_Operator.fbx shape from step 1 onto the window that appears. A progress bar will appear as the shape uploads to Mixamo's server. Once it is finished uploading, press the Next button.
4. On the Animations section of the site, select an animation you want to apply. Adjust the parameters as desired and press the Download button.
5. Select "Without Skin" in the Skin section and press the Download button. This will download just the animation file rather than the animation, the mesh, and the bones. The mesh and bones are already in the software, so you only need the animation if you are editing the standard male or female operator shapes.
6. In FlexSim, edit the Animations for an Operator or a Person flowitem.
7. In the Animations and Components window, select Edit Animation Clips.
8. In the Animation Clips window, press the Plus button to add the animation you downloaded in step 5 above. Optionally, you can edit the animation clip's name and press Apply. You can also edit the clip's length or split the animation into multiple clips using this window.
9. Close the Animation Clips window when you are done.
10. In the Animations and Components window, add a new Animation and name it. Then add an Animation Clip to that animation. Then select the clip and set its Animation and Clip values to the animation/clip set in step 8.
11. Now you can use the new animation in your model on this operator or person flowitem.
In the attached model we use a Time Table and two MTBF/MTTR objects to define schedule loss, availability loss (breakdowns), and an element of performance loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the send-to percentage, which accounts for quality losses. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with the minimum at 5 seconds - so it also contributes to performance loss. When the Type of the item changes, a setup time occurs, which is the final contributor to performance loss. Two state profiles were added to the processor - one to track production time and another for availability. An object process flow on the processor detects production profile state changes (between on and off shift) and regular FlexSim state changes, and determines the availability state that should prevail. A user command getOEEstat is used to access the values, which it calculates on demand and stores in a label on the processor called statsMap. The syntax for this command is: getOEEstat(myMachine,"OEE") The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE". A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats for processor 1. Processor_OEE_2.fsm 2023-08-22 Update: Added 'TEE' stat.
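As a usage sketch (the stat names are from the list above; the processor reference is a hypothetical name), you could pull the component ratios alongside the headline OEE value, which in the usual OEE decomposition is the product of availability, performance, and quality:

Object machine = Model.find("Processor1"); // hypothetical processor name
double availability = getOEEstat(machine, "AvailabilityRatio");
double performance  = getOEEstat(machine, "PerformanceRatio");
double quality      = getOEEstat(machine, "QualityRatio");
double oee          = getOEEstat(machine, "OEE"); // = availability * performance * quality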
As of the current release, 2024.2.2, FlexSim does not provide a tool to export an animated USD stage of a model. There is a way around this, though: recording the FlexSim model connected to an NVIDIA Omniverse live session using one of the tools in NVIDIA's USD Composer, the Animation Stage Recorder. This is a step-by-step tutorial to guide you through the process. It is assumed that, if you are interested in this, you already know how to connect FlexSim to a live session in NVIDIA Omniverse. This may not be the perfect way to go, but it worked for me. Record_Animated_Stage_from_FlexSim.pdf
FlexSim 2018 includes functionality for creating a custom table view GUI using a custom data source from within FlexSim. This article will go through the specifics of how to set this up. The Callbacks Custom Table Data Source defines callbacks for how many rows and columns a table should display, what text to display in each of the cells, whether they're read-only, etc. If you require further control of your table, you can use a DLL or module and subclass the table view data source in C++. An example of how this data source is used can be seen in the Date Time Source activity properties. In the example above, the data used to display both of these tables is exactly the same. The raw treenode table data can be seen on the right. The table on the left is displaying the hours and minutes for each start and end time rather than the start time and duration in seconds. In FlexSim 2018, there are a number of tables that currently utilize this new data source. They can be found at:
VIEW:/modules/ProcessFlow/windows/DateTimeArrivals/Table/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/CompositeStatePieOptions/SplitterPane/States/Table
VIEW:/pages/statistics/ChartTemplateProperties/tabcontrol>variables/tabs/StateBarOptions/SplitterPane/States/Table
Copying one of these tables can be a good starting point for defining your own table. The first thing you have to do in order for FlexSim to recognize that your table is using a custom data source is to add a subnode to the style attribute of your table GUI. The node's name must be FS_CUSTOM_TABLE_VIEW_DATA_SOURCE with a string value of Callbacks. Set your viewfocus attribute to be the path to your data. This may just be a variable within your table. It's up to you to define what data will be displayed. If you want to have the default functionality of the Global Table view, you can use the guifocusclass attribute to reference the TableView class. This gives you features like right-click menus, support for cells with pointer data (displays a sampler), tracked variables (displays an edit button) and FlexScript nodes (displays a code edit button). If you're going to use this guifocusclass, be sure to reference the How To node (located directly above the TableView class in the tree) for which eventfunctions and variables you can or should define. At this point you're ready to add event functions to your table. There are only two required event functions. The others are optional and allow you to override the default functionality of the table view. If you choose not to implement the optional callback functions, the table view will perform its default behavior (whether that's displaying the cell's text value, setting a cell's value, etc.). These event function nodes must be toggled as FlexScript.

The following event functions are required:

---getNumRows---
This function must return the number of rows to display in the table. 0 is a valid return value.
param(1) - View focus node

---getNumCols---
This function must return the number of columns to display in the table. 0 is a valid return value.
param(1) - View focus node

The following event functions are optional:

---shouldDrawRowHeaders---
If this returns 1, then the row headers will be drawn. The header row uses column 0 in the callback functions.
param(1) - View focus node

---shouldDrawColHeaders---
If this returns 1, then the column headers will be drawn. The header column uses row 0 in the callback functions.
param(1) - View focus node

---shouldGetCellNode---
Allows you to decide whether you want to override the table view's default functionality of getting the cell node.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

If shouldGetCellNode returns 1 then the following function will be called:

---getCellNode---
Return the node associated with the row and column of the table.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table

---shouldSetCellValue---
Allows you to decide whether you want to override the table view's default functionality of setting a cell's value.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldSetCellValue returns 1 then the following function will be called:

---setCellValue---
Here you can set your data based upon the value entered by the user.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
param(3) - The value to set the cell

---isCustomFormat---
Allows you to decide whether you should define a custom text format for the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If isCustomFormat returns 1 then the following functions will be called:

---getTextColor---
Return an array of RGB components that will define the color of the text. Each component should be a number between 0 and 255: [R, G, B]; for example, red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---getTextFormat---
Return 0 for left align, 1 for center align and 2 for right align.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The text to be displayed in the cell
param(5) - The desired number precision

---shouldGetCellColor---
Allows you to decide whether you should define a color for the cell's background.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellColor returns 1 then the following function will be called:

---getCellColor---
Return an array of RGB components that will define the cell's background color. Each component should be a number between 0 and 255: [R, G, B]; for example, red is [255, 0, 0].
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

---shouldGetCellText---
Allows you to decide whether you want to override the table view's default functionality of getting a cell's text.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table

If shouldGetCellText returns 1 then the following function will be called:

---getCellText---
Return a string that is the text to display in the cell.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the displayed table
param(3) - The column number of the displayed table
param(4) - The desired number precision
param(5) - 1 if the cell is being edited, 0 otherwise

---shouldGetTooltip---
Allows you to decide whether a tooltip should be displayed when the user selects a cell in the table.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

If shouldGetTooltip returns 1 then the following function will be called:

---getTooltip---
Return a string that is the text to display in the tooltip.
param(1) - The cell node as defined by getCellNode
param(2) - The row number of the selected cell
param(3) - The column number of the selected cell

---isReadOnly---
Return a 1 to make the cell read only. Return a 0 to allow the user to edit the cell.
param(1) - The row number of the displayed table
param(2) - The column number of the displayed table
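To make the two required callbacks concrete, here is a minimal sketch of what their FlexScript bodies might look like, assuming your viewfocus attribute points at a node containing ordinary table data (your actual data layout will dictate the real code):

// getNumRows - toggled as FlexScript on the event function node
treenode focus = param(1); // the view focus node
return Table(focus).numRows;

// getNumCols
treenode focus = param(1);
return Table(focus).numCols;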
Sometimes data exists in Google Sheets that needs to be brought in to FlexSim. There are multiple ways to do this, discussed in this article.

Copy and Paste
This is the easiest method to get data from Google Sheets into FlexSim. Here's how it works:
1. Open the desired sheet in your browser.
2. Click the top-left corner to select everything.
3. Copy the data (use Ctrl-C).
4. Open FlexSim.
5. Create a Global Table if you haven't already.
6. Ensure the number of rows and columns in the Global Table is large enough to hold the pasted data.
7. Click on the column header for the first row in the Global Table.
8. Paste the data (use Ctrl-V).
Pros: quick, easy. Cons: need to resize the global table correctly beforehand; repeat the entire process if the data changes.

Export/Import via CSV
This is also an easy method to get data. Here are the steps:
1. Download your sheet as a csv file.
2. In FlexSim, use the importtable() command to dump the csv into the global table. For example: importtable(Table("GlobalTable1"), "data.csv", 1)
You could add this code to your model's OnReset trigger if desired.
Pros: quick, table sized to csv data automatically. Cons: repeat downloading the csv if the data changes.

Export/Import via XLSX
You can also download a Google spreadsheet as an Excel file. Then you can use the Excel importer as normal.
Pros: quick, table sized to data automatically, many options for configuring. Cons: repeat downloading the xlsx file if the data changes.

Import via Python
This method is more advanced and requires some configuration for the model and your Google account. Once complete, however, changes can be pulled in automatically without any manual steps. Follow the Sheets quickstart for Python found here: https://developers.google.com/sheets/api/quickstart/python
This guide walks you through creating a Google Cloud Project and creating credentials for that project. In addition, consider using this modified python file instead. This file creates a get_values method that the model can call, and that method is also called from main(), so it's easy to test in a python debugger:

import os.path

from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError

# If modifying these scopes, delete the file token.json.
SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]

# The ID and range of a sample spreadsheet.
SAMPLE_SPREADSHEET_ID = "----- add your sheet's ID here -------------"
SAMPLE_RANGE_NAME = "A1:B"


def get_values():
    """Shows basic usage of the Sheets API.
    Prints values from a sample spreadsheet.
    """
    creds = None
    # The file token.json stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists("token.json"):
        creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                "credentials.json", SCOPES
            )
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open("token.json", "w") as token:
            token.write(creds.to_json())

    try:
        service = build("sheets", "v4", credentials=creds)

        # Call the Sheets API
        sheet = service.spreadsheets()
        result = (
            sheet.values()
            .get(spreadsheetId=SAMPLE_SPREADSHEET_ID,
                 range=SAMPLE_RANGE_NAME,
                 valueRenderOption="UNFORMATTED_VALUE")
            .execute()
        )
        values = result.get("values", [])
        return values
    except HttpError as err:
        return []


def main():
    values = get_values()
    if not values:
        print("No data found.")
        return
    for row in values:
        print(row)


if __name__ == "__main__":
    main()

Save the above script next to your model. Create a user command in your model. Format the user command for python and enter the file name and method name. It might look something like this:

/**external python: */
/**/"sheets"/**/
/** \nfunction name:*/
/**/"get_values"/**/

The return type of the command should be var, which means any Variant type. Use code like the following to clone the data to a global table:

Array values = getValues(); // call the user command
Array colHeaders = values.shift();
for (int i = 1; i <= values.length; i++) {
    Array row = values[i];
    row[0] = nullvar;
}
Table(values).cloneTo(Table("GlobalTable1"));

Add the above code to a reset trigger.
Pros: automatic once complete, easy to keep data up-to-date. Cons: requires complicated setup, some python coding. The script could be adjusted to download additional ranges and then return all data at once, but that requires some coding ability.

Import via HTTPS
Google recommends you use a client library to access its APIs. However, it is entirely possible to use HTTPS requests instead. This could all be done from FlexScript, with no additional installations required.
Pros: done all from FlexScript, no extra installs. Cons: very technical.

Conclusion
There are several ways to extract data from Google Sheets into FlexSim. Each has pros and cons. Choose the one that best fits your circumstances. Good luck!
If you have a contiguous conveyor network, you can just route items using Conveyor.sendItem() and FlexSim will guide the item to the destination, passing through inline and side transfers as required for the shortest path. If between some conveyors you use exit and entry transfers - perhaps to easily add elevators and shuttles as transports between them - then you'll normally be faced with adding logic to figure out which exit transfer to go to and which port to take from that transfer - and in a large model that logic can be extensive and hard to maintain. The attached model and library provide commands for automated routing through multiple conveyor sub-sections connected through exit/entry transfers, to conveyor points and to connected fixed resources. This means that you may no longer have to write sendTo code with case statements on each exit transfer to determine which port an item should exit through - nor need decision points with case logic to decide the destination for Conveyor.sendItem(). In the example model three sources create items with random destinations, which are routed through the conveyor system, transfers and ports automatically to arrive at the correct destinations - some of the ports having transport to perform the move. To make this work in any model you should load the user library, which will auto-install a set of user commands and a General Process Flow. The first step is to run the user command createAllTravelMaps(), which will calculate all the reachable destinations (decision points, stations, PEs, attached fixed resources and transfers) from all the conveyor points and entry/exit transfers, along with estimates of the convey time (from the conveyor class). This information is consolidated to create the shortest routes and is stored in a label 'travelMap' on each decision point, station, PE and transfer. To make use of the travelMap data there are three additional user commands supplied that are intended to be used directly by the modeller:

getNextConveyPoint(thispoint, destination) – returns the next point to send an item to from this point in order to ultimately reach the destination.
getConveyExitPort(exitTransfer, destination) – returns the port through which an item should exit the exitTransfer in order to reach the destination.
getConveyItemsNextConveyPoint(item, destination) – returns the next point to which an item should travel to reach the destination from its current position on a conveyor.

The simple process flow in the example and library is set to listen to the Group members of EntryTransfers and ExitTransfers in order to look up the 'destination' label, and it either sends the item to the next point or, in the case of the exit transfers, overrides the sendTo port with the value from the map. I've added some documentation to the user commands which you can access easily via the command helper: ConveyorTravelMaps_0.3.fsl ConveyorTravelMapExample.fsm

You may find createAllTravelMaps() takes a while, which is why a progress bar has been added. You may not need all points to be evaluated exhaustively, so there is an option to pass in a flag indicating to only start evaluation from entry transfers, which will create somewhat incomplete maps for intermediate points. A future refinement would be to account for transport time from exit transfers, either by recording the times or providing a port list with the expected times. Clearly, if you make changes to your transfer positions or conveyor layout you should rerun createAllTravelMaps().
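As an illustration of how these commands might be wired in by hand (a hedged sketch - the library's listener flow already does this for you; the 'destination' label and the trigger contexts are as described above):

// e.g. in a decision point's arrival logic: forward the item one hop toward
// its labeled destination.
Object dest = item.destination;                   // destination label set at creation
Object next = getNextConveyPoint(current, dest);  // next conveyor point on the route
Conveyor.sendItem(item, next);

// e.g. in an exit transfer's "send to port" field:
return getConveyExitPort(current, item.destination);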
Good practice to reduce variance when experimenting is to separate the streams of things that might vary in the model, so that the random sampling is independent. An example might be that you have a number of processors who are members of the same breakdown profile (MTBF/MTTR object) where the individual breakdowns are dependent on the state of the processor. If during one scenario a processor is used more than before, then it may sample the duration and next breakdown earlier, and therefore change the sequence of the other machines' sampling of breakdown times, increasing variance. This is because the default setting for the MTBF time fields uses 'getstream(current)' - which means a single stream for the MTBF object, shared across all members. You could try to change this in the MTBF by using 'getstream(involved)', where 'involved' refers to the breakdown member machine. This causes other problems, since if you're sampling processing times using the machine's stream too, then the number of items processed will again change the breakdown time samples. You may judge this to be acceptable, but in an ideal world you'd still want separate streams, and may want multiple streams for setup, processing, breakdowns, or subsystem failures. One way to accomplish this is by changing the way getstream() works such that it can generate a stream for any value you pass to it. That might be an object, as the current getstream() accepts, or it could be the string name of the object or its path. It could also be an array, which then opens a number of possibilities:

In a breakdown you could replace getstream(current) with getStream([current,involved]) // generates a unique stream number for the MTBF/machine pair*

In an Object Process Flow you could replace getstream(activity) with getStream([current,activity]) // generates a unique stream for the instance and activity pair, and works for the general process flow too.

For a processing time on a processor, instead of getstream(current) you could use getStream([current,"Processing"]) and getStream([current,"Setup"]) to generate two separate sampling streams.

The attached library contains an auto-installing user command that overrides getstream() to provide this functionality. The stream values save with the model. getStream-byvariant3.fsl

* This implementation does have some limitations, since during an experiment it does not communicate back to the master model when trying to create new streams. For this reason you'll want to try to have all possible streams set up before running an experiment, or avoid the type of actions that dynamically create the requirement for new streams - so that might be keeping all possible fixed resources and task executers, and hiding/removing them from groups rather than destroying them as the OnSet options of the parameters table do currently. Alternatively, if you consistently name the dynamically created instances, then the MTBF stream expression could be: getStream([current, involved.name])

Update: I've edited this post and library to use getStream (capital 'S'), since the override parameter (var thing) doesn't stay in place and eventually causes FlexScript build errors. So with the updated library you'll need to find/replace from the model tree 'getstream' with 'getStream'.
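For the curious, the core of such an override can be quite small. Below is a rough sketch of the idea only - not the attached library's actual code; the helper name, key-building, and storage node are all illustrative assumptions - mapping each distinct key to a stable stream number stored under a node that saves with the model:

// Hypothetical sketch: return a repeatable stream number for an (object, tag)
// pair, mimicking getStream([current, "Processing"]).
int streamFor(Object obj, string tag)
{
    // a plain data node kept in the model tree so stream numbers persist
    treenode map = Model.find("Tools/StreamMap");
    if (!objectexists(map))
        map = Model.find("Tools").subnodes.assert("StreamMap");
    string key = obj.getPath() + "|" + tag;
    treenode entry = map.subnodes[key];
    if (!objectexists(entry)) {
        entry = map.subnodes.assert(key);
        entry.value = map.subnodes.length; // hand out stream numbers sequentially
    }
    return entry.value;
}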
In version 2018 and on, you can make this chart by dragging the Throughput Per Hour by Type template from the dashboard library. If you install the template (available on the Advanced tab), you will see a Process Flow and a Statistics Collector appear in your toolbox.

One of the most common questions from FlexSim users is as follows: how do I make a chart that shows the output every hour? You can make this chart in three steps.

Configure the Statistics Collector
First, you need a Statistics Collector. Make a new one in the toolbox (click the green plus button, select Statistics, and then select Statistics Collector). On the Event Listening tab, use the green plus button to add a timer event, and configure as shown here: This timer event will fire every hour (every 3600 seconds) in the model. Notice the shared label, which stores all members of the Processors group as an array. We will use this label in the next step. Once you have configured the timer, you need to set up the row mode for this collector. We want one row per processor, and we need to use the Processors label as the row value. Since the Processors label is an array, we will get three rows per timer event, each row corresponding to a processor. Finally, we can add the columns. The three columns are as follows:
Time - use the pick list to select Model Date/Time from the Time menu
Object - use the pick list to select ID of row value from the IDs menu
Output - use the pick list to select Statistic by Object from the Object Statistics menu, using data.rowValue as the object value in the popup
If you use the pick list to choose these options, then the storage type and display format options should be set automatically. With these three columns in place, we can watch the table populate. Reset and run the model at high speed. Every model hour, you should see a new set of rows appear, one for each processor in the group. The table will look something like this:

Configure the Calculated Table
The Statistics Collector table from the previous steps is close to what we want, except that the output value always increases as the model runs. But what about the output for just a single hour? To get that value, we can use a Calculated Table. Make a new calculated table, and give it the following query (in the Query field):

SELECT Time, Object, ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), 0) AS OutputPerHour FROM StatisticsCollector1

This query uses SQL window functions. Basically, it says that each row's value should subtract the previous row's value for the same object. In addition, if that value is NULL (because it's the first row for that object), then just use 0. If you reset and run the model, so that the collector table has at least a few rows in it, click the Update button to run the query. Notice that the Time and Object columns show numbers. This is because the Calculated Table can't infer the formatting of the column. To set the formatting, use the Display Format tab. You may also wish the table to update every hour, with the Statistics Collector.

Make the Chart
Now that our data is correct, we can make a chart. Make a new dashboard, and create a Time Plot chart. Point the chart to the calculated table. Let's use the Time column for the X values, and the OutputPerHour column for the Y values. In addition, make sure to split by the Object column. If the calculated table updates every hour, then running the model should create the chart shown at the beginning of this article.
Here is the model used to create this chart (should work in 2017 Update 2 Beta or later; beta must be built on or after August 21, 2017). outputperhourdemo.fsm
One of the most powerful features of Process Flow is the ability to easily define a Task Sequence. However, many real-life situations require the coordination of multiple workers and machines to do a single task. This article demonstrates one approach you can use with Process Flow to coordinate multiple Task Executers, or in other words, to create a coordinated task sequence. The article discusses an example model (handoff.fsm). It might be easiest to open that model, watch it run, and read this article with the model open.

The Example Scenario

Here is a screenshot of the demo model used in this article:

Items enter the system on the left. The yellow operator must carry each item to the queue in the middle, and then wait for the purple operator to arrive. Once the two operators are both at the middle queue, the yellow operator can unload the box, and the purple operator can take it. After this point, the yellow operator is free to load another item from the left queue. The purple operator takes the item, waits for a while, and then puts the item in the sink on the right.

The interesting part of this model is the handoff. The yellow operator must wait for the purple operator, and vice versa. This is the synchronization point, and it requires coordination of both operators. The approach used in this model allows you to add more operators to the yellow side, and more to the purple side, while still maintaining that a yellow operator must wait for a purple operator before unloading the box.

The Example Model

In addition to the 3D layout shown previously, there are five process flows in the example model. The first is a General flow, and defines the logic for each task. The second is a Task Executer flow, and defines the logic for the yellow operator. The third is also a Task Executer flow, and defines the logic for the purple operator. The remaining two flows are synchronization flows, for synchronizing between the other three flows.

Synchronizing on a Task

The basic approach in this model uses the Synchronize activity. This activity waits for one token from each incoming connector before it allows any of the tokens to move on. Here is the Yellow Purple Sync flow (a global Sub Flow) from the example model:

The flows for the yellow and purple operators each use the Run Sub Flow activity to send a token to this flow, to their respective start activities (you can use the sampler on the Run Sub Flow activity to sample a specific start activity in a sub flow). This is what allows the yellow and purple operators to wait for each other.

However, it is important that the yellow and purple operators are both doing the same task. In this model, there is a token that represents each item that needs to be moved. Both operators get a reference to this task token. The Synchronize activity is set up to partition by that task token. That means a yellow operator and a purple operator must both call this sub flow with the same task token, ensuring that each task has its own synchronization (a concrete trace is given at the end of this article). In the example model, this kind of synchronization happens between operators, and also between each task and an operator. Basically, the task must wait for the operator to finish that operator's part.

The Task Flow

A task token is created every time an item enters the first queue. The Tasks flow puts that task on both the Yellow and Purple lists. In both cases, the task token does not wait to be pulled, but keeps itself on the list.
Then the task token waits for a yellow operator to finish with it, and then for the purple operator to finish with it. There is a Zone in this flow, but its only purpose is to gather statistics on how long the whole task took.

The Yellow and Purple Flows

These flows are easiest to understand when viewed side by side:

Recall that each task is put on both the Yellow and Purple lists at the exact same model time. The yellow operator waits to get a task (at the Get Task activity). Then the operator travels to the first queue, gets the item, and travels to the second queue. At this point, the yellow operator waits. At the same time, the purple operator is also waiting for the task. The purple operator just has to travel to the second queue before waiting for the yellow operator. Once the yellow operator arrives, the purple operator also has to wait for the yellow operator to unload the box. On the yellow side, once the purple operator arrives, the yellow operator unloads the box, and then synchronizes with the purple operator, allowing the purple operator to load the box.

Summary

The purpose of this article is to show one method for synchronizing tokens in separate flows. That method is as follows:
- Have a token for each task.
- As each task executer (or fixed resource) needs to synchronize, it uses a Run Sub Flow activity, putting its token into a specific Start activity.
- The Sub Flow (a global Sub Flow) has a Synchronize activity that requires a token from each participant for that task before releasing the tokens.

This is certainly not the only way to create this model. However, there are some advantages:
- By forcing the task to synchronize, you can gather stats on how long each phase of the task took, as well as how long the complete task took.
- You can add more yellow or purple operators by copy/paste; they simply follow their own logic.
- Each set of logic is separated: tasks, yellow operators, and purple operators each have their own flows, making each one much simpler.

The exact approach used in the example model will not transfer unchanged to every model. However, you can apply the general principles and adapt them to your own situation.
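To make the partitioning concrete, here is a hypothetical trace (times and task numbers invented for illustration):
- At time 10, the yellow token for task 1 reaches the Synchronize activity and waits.
- At time 12, the purple token for task 2 arrives. Nothing is released, because the two tokens carry different task tokens and therefore fall into different partitions.
- At time 15, the purple token for task 1 arrives. Both tokens for task 1 are now released, while the task 2 token keeps waiting for its yellow counterpart.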
Attached is an example model that shows how you can use reversible conveyors for routing/sorting of items. ReversibleRoutingConveyor.fsm

Traditionally, we've warned against using reversible conveyors for purposes other than accumulation buffers. The main reason I've been hesitant to promote alternative uses is that the routing system for conveyors is, and will continue to be, static. In other words, the path finding algorithm that sends an item through a network of conveyors to a destination point does not change when one or more conveyors in the system are reversed. Put another way, "for routing purposes, ..., the conveyor is always assumed to be conveying in its original direction." This naturally makes using reversible conveyors for routing more complex. However, as long as you can work within those constraints, you can still get the desired outcome. The attached model does this by 'shortening' the routing decision so that it can always route onto conveyors in their forward direction.

The attached model sorts items by color, moving them between two conveyors via a reversible conveyor that conveys in either direction as needed. In order to work within the 'static routing' rule, I split the reversible conveyor into two separate conveyors that are directed into each other. This way, I can route items onto the reversible section by referencing a conveyor whose primary forward direction always diverts from the line a given item is on. The critical element is that I always have to make sure that when one conveyor is moving forward, the other is reversed, and vice versa. I also have to implement some mutual exclusion, blocking some items so they aren't sent onto the two conveyors in opposite directions at the same time. All of this is done in the process flow.

I honestly don't know how close this example is to a real-life situation. We've simply received some requests for a reversible conveyor that can do more than serve as an accumulation buffer, and routing/sorting is the main alternate example I can think of. This is one way you can achieve such a result.
This article explores an example model in which items on downstream lanes are able to reserve dogs so that items on upstream lanes cannot use them: reservedogdemo.fsm

About Dogs in FlexSim

FlexSim simulates dogs on a power-and-free system in an extremely abstract and minimal way. A dog isn't a persistent entity at all. Instead, FlexSim calculates where dogs would be, given the speed, and when they would interact with items. This has a huge performance benefit. But if your logic needs items to interact with specific dogs, this can pose a problem: how do you interact with such an abstract entity?

The Catch Condition

The only time you can "see" a dog in FlexSim is during the conveyor's catch condition: https://docs.flexsim.com/en/23.1/Reference/PropertiesPanels/ConveyorPanels/ConveyorBehavior/ConveyorBehavior.html#powerAndFree

The catch condition fires when a dog passes by an item. If the catch condition returns 1, the item catches the dog and transfers to the power-and-free conveyor. If it returns 0, the item does not catch the dog. During the catch condition (and only during a catch condition), you can learn many things about a dog:
- ID - each dog has an ID, derived from the length of the conveyor and the distance the conveyor has travelled. If a conveyor is 26 dogs long, the dogs will have IDs 1 through 26.
- Location - since an item is trying to catch the given dog, you can derive the dog's location from the item's location.
- Speed - the conveyor that owns the dog is "current" in the catch condition, so you can get the speed of the conveyor at that point.

We'll use all these pieces of information in a moment.

Creating Tokens to Represent Dogs

The first real insight in this model is to make a dummy item. The purpose of this dummy item is to cause the catch condition to fire. It never gets on the conveyor. But when the catch condition fires, it makes a token that represents the dog. In this example, that item has a label called "DogFinder". Here is the relevant code from the catch condition:

if (item.DogFinder?) { // this logic only fires for the dummy dog-finder item
    Object pe = current.DogPE;
    if (pe.stats.state(1).value == PE_STATE_BLOCKED) {
        return 0; // an item is blocking the photo eye, so this dog is not available
    }
    // Estimate how long this dog will remain usable, assuming the conveyor
    // keeps running at the same speed
    double dist = current.MaxDogDist;
    double speed = current.targetSpeed;
    double duration = dist / speed;
    if (!item.labels["DistAlong"]) {
        // First dog ever: calculate the dog's position from the item's position
        item.DistAlong = Vec3(item.getLocation(1, 0, 0).x, 0, 0).project(item.up, current).x;
    }
    // Create a token carrying everything needed to estimate the dog's
    // position later and to compare it against other items
    Token token = Token.create(0, current.DogHandler);
    token.DistAlong = item.DistAlong;
    token.Conveyor = current.as(treenode);
    token.Duration = duration;
    token.DogNum = dogNum;
    token.Speed = speed;
    token.DetectTime = Model.time;
    token.release(1);
    return 0; // the dummy item never actually catches the dog
}

There's a lot going on in this code:
- The logic only fires for the fake dog-finder item.
- If the photo eye just upstream from the dog is blocked, there is an item there, and this dog is not available. The code returns 0 in that case.
- The code figures out how long this dog will last (the duration), assuming the conveyor runs at the same speed. In this model, there's a label on the conveyor called MaxDogDist, which is the distance from the photo eye to the end of the conveyor, minus 2 meters.
- If this is the first dog ever, the code calculates the position of the dog, given the position of the item, and stores it on a label.
- Finally, the code creates a token with several labels. We'll need all this information to estimate where the dog is later, and to estimate how far it is from other items.

Pushing Dog Tokens to a List

Once the token is made, we need to push it to a list, so that items can pull it.
If all your items are the same size, you can just push the token to a list directly. In this model, however, there are larger items that require two dogs, so there's a Batch activity first. The dummy item is far enough back that it can detect two dogs and still push the first dog to the list in time for the first lane. The flow holds the dog back in the Batch activity until one of two things happens:
- The next dog token appears, completing the batch.
- The max wait timer on the batch expires, indicating that the next dog is not available (otherwise, there would have been a token). This duration is based on the conveyor's speed and dog interval.

If the batch is complete, the first dog in the batch can be marked as a "double", meaning the dog behind it is also available. Once the flow has determined whether the dog is a single or a double, it pushes it to the list.

Creating the DistToDog Field

When pulling a dog from the list, an item needs to know the position of the dog relative to the item. Is it 0.3 meters upstream? Or is it 2 meters downstream? When we query the set of dogs, we need to filter out downstream dogs and order the upstream dogs by distance, to reserve the closest one:

WHERE DistToDog >= 0 ORDER BY DistToDog

Here, DistToDog is positive if the dog is upstream, and negative if the dog is downstream. The code for this field is as follows:

/**Custom Code*/
Variant value = param(1);    // the dog token on the list
Variant puller = param(2);   // the item waiting to merge
treenode entry = param(3);
double pushTime = param(4);

// The puller item's distance along the main conveyor
double distAlong = Vec3(puller.getLocation(1, 0, 0).x, 0, 0).project(puller.up, value.Conveyor).x;
// Estimate how far the dog has travelled since its token was created
double dt = Model.time - value.DetectTime;
double dx = value.Speed * dt;
double dogDistAlong = value.DistAlong + dx;
// Positive means the dog is still upstream of the item
return distAlong - dogDistAlong;

This code assumes that the item waiting to merge is the puller. We calculate the item's distance along the main conveyor, estimate the current location of the dog based on when the DogFinder item created the token, and then take the difference between the item's position and the dog's position. For example, if a dog was detected 2 meters along the conveyor at time 100 and the conveyor runs at 0.5 m/s, then at time 104 the dog is estimated to be 4 meters along; an item at 5 meters gets a DistToDog of 1, meaning the dog is one meter upstream and still on its way.

Pulling Dogs from the List

Each incoming lane has a Decision Point. The main process flow creates a token when an item arrives there. At a high level, this token just needs to do something simple: pull the token of an available upstream dog. If all the items are the same size, it's that simple. But this example is more complicated! If the item is large, we also need to pull the upstream dog behind the dog we got, so that no other item can get that dog. And it gets even more complicated: an item may acquire the dog behind a dog that was marked as a double. In that case, we need to mark the downstream dog as "not double", so that big items won't try to get it. Most of the logic in the ConveyorLogic flow handles that case.

Using the Dog

Finally, the item must be assigned to its dog. The ConveyorLogic flow sets the DogNum label on the item. Then the catch condition checks whether the passing dog matches the item's DogNum.

Upstream Items

The final piece of this model is allowing upstream items to catch a dog on this conveyor. This model adds a special label to those items called "ForceCatch". The catch condition always returns true for those items.
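Putting the last two sections together, the tail end of the catch condition might look something like the following sketch. This is only an illustration built from the labels named in this article (DogFinder, ForceCatch, DogNum); the exact code in the attached model may differ:

if (item.DogFinder?) {
    // ... the token-creating logic shown earlier ...
    return 0;
}
if (item.ForceCatch?) {
    return 1; // upstream items always catch the first dog that comes along
}
// Other items only catch the specific dog they reserved; DogNum is assumed
// to have been set on the item by the ConveyorLogic flow
return item.DogNum == dogNum;

Because the comparison returns 1 only when the passing dog's number matches the reservation, each item waits at the merge point until its own dog arrives.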