FlexSim Knowledge Base
Announcements, articles, and guides to help you take your simulations to the next level.
This is a demo model for the new warehouse functionality found in version 2019 Update 2: warehouse-demo-model.fsm

The basic premise of this model is that items of a particular type come in and must be placed in slots for that type. Orders also come in, requiring items of a particular type that must be retrieved from storage. The model is meant to be a general concept model. It demonstrates the use of many of the new features in 19.2 and embodies some high-level "how-to's" of warehousing that are discussed in the user manual. Most logic for the model is implemented in a process flow. The process flow logic is separated into three main categories: initial inventory, inbound, and outbound processes. Further, the outbound process demonstrates both random-based and history-based order generation.

Initial Inventory

The model includes a Global Table of initial inventory. The process flow's initial inventory section reads this table, and then creates items and places them into slots based on that initial inventory. This logic relies on the Address Scheme defined in the Storage System object, and uses direct addressing to get a slot using Storage.system.getSlot().

Inbound

I use the process flow to assign a slot to each incoming item, using an Assign Labels activity called Find Slot. This uses a pick list option that wraps a call to Storage.system.findSlot(). The query matches the Type of the item with the Type of the slot, and also ensures that the target slot has space to fit the incoming item. The query also randomizes the order. Randomizing the order would likely not be necessary in most situations, but it makes the demo look nice. If the Find Slot activity successfully finds a slot to store the item, then I assign the item to that slot and have an operator place it in the rack.

Outbound

I also use the process flow to generate orders, and to reserve items in the storage system for those orders. In most warehouse simulations, order generation can be driven in two ways. First, you can use random probability distributions to generate orders based on general throughput metrics. Second, order generation can be based on historical data. This model gives an example of each method.

In the random method, orders are generated randomly every ~30 seconds. Each order includes a random number of SKU line items, and each line item includes a random quantity of that SKU. Order tokens spawn line item tokens, which in turn spawn tokens associated with individual picks (the Fill Out Individual Picks process). For each pick, the token finds an item in storage that matches the target SKU. This is an Assign Labels activity (Find Item by SKU) with a pick option that wraps a call to Storage.system.findItem(). It finds an item that matches the required type, again using a query. Once the item is found, it reserves the item as "outbound" by assigning the Storage.Item.assignedSlot property to null (the Set to Outbound activity). This ensures that no other process will find that same item for picking.

The history-based order generation process uses much of the same functionality as the random-based one, but it instead reads an "OrderHistory" table to determine when orders are started and what those orders contain. The OrderHistory table represents a simplified format of what you would likely see in a standard orders table.
First, the process flow creates a transformed table that aggregates each order into a single row (this could technically be done as post-import code, but I do it in the process flow for visibility). Then the process flow loops through that transformed table, waiting for the start time of each order and then spawning that order.

Custom Rack Visualization

I have also customized the visualization of the racks. I added text to the front of each rack slot (and to the bottom, for floor storage) that shows the address of that slot. Further, I've given the text a background that is color-coded to the SKU that the slot is designated to store. This was all done through the Storage System's Visualizations tab, by customizing the Rack visualization.
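As a concrete illustration of the outbound reservation described above, the Set to Outbound step amounts to something like this FlexScript sketch (the Item label name is an assumption; the demo model does this through pick list options rather than custom code):

    // Reserve the picked item as "outbound" so no other pick can find it.
    Object item = token.Item;                      // token label referencing the flowitem
    Storage.Item storageItem = Storage.Item(item);
    storageItem.assignedSlot = null;               // clearing assignedSlot marks the item outbound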
Hi everyone, recently @Arun Kr posted this idea to add a utilization vs. time chart to the available chart types in FlexSim. I had previously built a relatively easy-to-set-up Statistics Collector for use in our models. I have since cleaned up the design a bit and thought to post it here, since this seems to be a commonly desired feature. utilization_vs_time_collector_24_0_fm.fsm

All necessary setup is done through labels of the collector. The first three are actually identical to labels found on the default Statistics Collector behind a state bar chart.

- Objects should point at a group that contains all objects the collector should track the utilization for.
- StateTable is a reference to the state table that will be used to determine which states count as 'utilized'.
- StateProfile is the rank of the state profile that should be read on the linked objects (0 for the default state profile).
- MeasureInterval is the time frame (in model units) over which the collector will take the average of the utilization.
- NumSubIntervals determines how often that measurement is actually taken. In the example image above (and the attached model) the collector measures the average utilization over the last 3600s 12 times within that interval, i.e. every 300s. Each measurement still denotes the utilization over the complete MeasureInterval. The graph on the left takes a measurement every 5 minutes, the one on the right every 60 minutes. Each point on both graphs represents the average utilization over the hour preceding that point in time.
- StoredTimeMap is used to allow the collector to function correctly past a warmup time, by storing the total utilized value of each object up to that point. This should not be manually changed. Since this last label has to be automatically reset, remember to save any changes made to the other labels by hitting "Apply".

The collector works by keeping an array of 'total utilized time' values for each object as row labels. Whenever a measurement is taken, the current value is added to the array and the oldest one is discarded. The difference between the newest and oldest value is used to calculate the average utilization over the measurement interval. The NumSubIntervals label essentially just controls how many entries are kept in that array.

To copy the collector into another model, create a fresh collector in the target model, then copy the node of this collector from the tree of the attached model and paste it over the node of the fresh collector.

I hope this can help to speed up the modeling process for some people (at least until a chart like this is hopefully implemented in FlexSim) or serve as inspiration for how one can use the Statistics Collector. I might update the post with a user library version if I get to creating it (and if there is demand for it).

Best regards
Felix

Edit: Added a user library with the collector as a draggable icon to the attached files.
Edit 2: I noticed a bug while using the collector. Having the tracked objects enter states that are marked as "excluded" in the state table would lead to incorrect utilization values (possibly even below 0 or above 100%). Replaced the library with an updated version that fixes this.
Edit 3: I fixed another bug that resulted in a wrong utilization value for the first measurement after the warmup time if the object spent time in an excluded state prior to the warmup. utilization-vs-time-collector-library-20250212.fsl
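As a sketch of the rolling-window arithmetic described above (variable names here are illustrative, not the collector's actual internals): with FlexScript's 1-based arrays, if history holds the sampled 'total utilized time' values for one object, oldest first, then:

    // Average utilization over the measure interval, as a percentage.
    // history[1] is the oldest sample, history[history.length] the newest;
    // first and last samples are measureInterval apart in model time.
    double utilized = history[history.length] - history[1];
    double utilization = 100.0 * utilized / measureInterval;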
This model and library will allow you to produce a heat map of anything moving in the model, including AGVs and flowitems. To add this to a model is simply a matter of:

1) Load the attached user library
2) Add objects to the group HeatMapMembers
3) Drop a heat map object (cylinder) into the model - reset and run.

With this updated version you can now have multiple mapper objects in the same model showing different groups - made easier by the addition of a 'groupName' label on the mapper. You can easily change the height at which the map is drawn using the 'zdraw' label, and alter the sampling interval and grid size using the 'heatInterval' and 'resolution' labels. The resolution is the number of divisions per model length unit. In the example model, set to metres, a value of 2 gives 4 divisions per square metre. Currently non-flowitems are set to ignore time when the object is in an idle state. HeatMapAnything.fsl HeatMapAnything.fsm
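If some of the objects you want tracked are created while the model runs (flowitems, for example), they can also be added to that group in code, e.g. from a creation trigger (a sketch; the group name comes from step 2 above):

    // Add the newly created flowitem to the heat map's tracked group.
    Group("HeatMapMembers").addMember(item);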
Attached is an example model and user library comprising commands to return an array of objects whose bounding boxes intersect, and a Collision Detection object to drop into your model. The Collision Detection object has a ticker interval label to adjust the frequency of checks, and will switch the colliding objects to selected. It looks for two groups: "Obstacles", containing static objects in the scene (which may be overlapping and not recorded as collisions), and "Colliders", which are the objects navigating the scene whose bounding boxes should be checked for intersections. In the example model I'm adding the flowitem when it is created using Group("Colliders").addMember(item). The detector code is on its FlexScript label, 'analyseScene', which is first scheduled to run by the object's reset trigger. collisionDetection3.fsm BBCollisionDetection2.fsl
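For orientation, the scan that 'analyseScene' performs could be pictured as the following sketch. The boundingBoxesIntersect() helper is a hypothetical stand-in for the library's actual intersection command:

    // Compare each navigating object against the static obstacles and
    // highlight any collision by switching the mover to selected.
    Group colliders = Group("Colliders");
    Group obstacles = Group("Obstacles");
    for (int i = 1; i <= colliders.length; i++) {
        treenode mover = colliders[i];
        for (int j = 1; j <= obstacles.length; j++) {
            if (boundingBoxesIntersect(mover, obstacles[j])) // hypothetical helper
                switch_selected(mover, 1);
        }
    }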
This quick video demonstrates how to create custom animations on machines or equipment in FlexSim. We customize an animation on the Processor object, recreating the real movement of a stretch-film wrapping machine. See the complete step-by-step tutorial for this task on the FlexSim Brasil YouTube channel. Also attached to this article is a folder called 'Wrapper' with the 3D files, so that anyone interested in creating these customizations can access and use them. wrapper.rar animationcreator.fsm
FlexSim's Webserver is a query-driven manager and communication interface for FlexSim. It allows you to run FlexSim models through a web browser like Google Chrome, Firefox, Internet Explorer, etc. Since the FlexSim Web Server is a basic service that serves FlexSim to a browser, you may want to proxy to this service through a full-service web server through which you can control security and authentication. This guide will walk you through proxying to the FlexSim Web Server through the Apache web server.

Install the FlexSim Web Server Program

Download and install the FlexSim Web Server from https://account.flexsim.com
Edit C:\Program Files (x86)\FlexSim Web Server\flexsim webserver configuration.txt
Change the port from 80 to 8080
Start the FlexSim Web Server by double-clicking flexsimserver.bat
Test the server by going to http://127.0.0.1:8080

Install Apache Web Server for Windows

Download Apache x64 from https://www.apachehaus.com/cgi-bin/download.plx
Extract the httpd-<version>.zip file you have downloaded
Go into the httpd-<version> folder you have extracted and copy the Apache24 (or Apache25) folder to C:\

Install Apache Dependencies

Make sure you have the Microsoft Visual C++ 2008 SP1 package installed. You can get it here: https://www.microsoft.com/en-US/download/details.aspx?id=26368
Download the vcredist_x64.exe package and run the installation

Configure Apache

Open C:\Apache24\conf\httpd.conf in a text editor. Look for the following lines in this configuration file and remove the # character:

    #LoadModule proxy_module modules/mod_proxy.so
    #LoadModule proxy_http_module modules/mod_proxy_http.so
    #LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

Those modules should now look like this:

    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

At the bottom of the httpd.conf file, add these 3 lines and save the file:

    ProxyPass / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
    AllowEncodedSlashes On

Run Apache Web Server

In a file explorer or CMD prompt, browse to the C:\Apache24\bin folder and run httpd.exe

Now that you have the FlexSim Web Server proxied through Apache, you may decide you want to configure Apache to handle security, authentication, and customization. Since this is out of the scope of this guide, you can find details on the Internet that can guide you through setting these customizations up. A few resources you may consider:
https://community.apachefriends.org/f/
https://stackoverflow.com
This article reviews one method for making a state Gantt chart for the default and alternate state profiles.

Example Model

You can download the model for this walkthrough (stateganttdemo.fsm). The model has two multiprocessors, in a Group called Multiprocessors. Each multiprocessor has two processes: Process1 and Process2. To make the chart, we will first make a Statistics Collector, and then a Calculated Table.

Making the Statistics Collector

Make a new Statistics Collector. On the Event Listening tab, use the Sampler to listen to On State Change of the group of multiprocessors. You can leave the parameter names alone. However, we need to add a label, so we can record the profile number. Select the new event, and then use the green plus button in the Event Labels area to add a label for this event. Set its name to ProfileNum, and its value to the following code:

    data.StateProfileNode?.rank

The event settings should look something like the following:

Next we need to set the row mode. Make sure it's set to Add Per Event, with no row value.

As the final configuration step for the statistics collector, we need to set up the columns. There should be four columns in this collector:

Time - In the pick options, select Time, then Model Date/Time
Object - In the pick options, select IDs, then ID of Event Object
Profile - Type data.ProfileNum for the value. The default storage and display format are fine.
State - Set the Storage Type to String, and type the following code:

    data.eventNode.as(Object).stats.state(data.ProfileNum).profile[data.ToState + 1][1]

The code is necessary because On State Change occurs before the state is set to the new state, so the code looks up the name of the future state in the profile table. When you reset and run this model, you will see a table like the following:

Making the Calculated Table

Make a new Calculated Table, and give it the following query:

    SELECT Object, Time AS StartTime,
        LEAD(Time) OVER (PARTITION BY Object) AS EndTime, State
    FROM StatisticsCollector1
    WHERE Profile = 1

This query creates an Object column as well as a Time column. To get the time that the current state ends, we look to when the next state begins. The LEAD() function looks ahead in the table, and the OVER (PARTITION BY Object) clause makes sure that LEAD() looks to the next row with the same Object. We also record the State column, and filter out the standard state profile, keeping the special multiprocessor state profile.

Once you get this query to work, change the Update Mode to By Interval, and set the interval to 20 or 30. Since the Statistics Collector table will get longer and longer, the query will become more and more expensive as the model runs. To control how much time is spent running the query, we use an interval. The final configuration of the Calculated Table should look like this:

You will need to set the Display Format of each column on the Display Format tab (Object, Date/Time, Date/Time, and Raw).

Making the Chart

Make a new dashboard, and create a Gantt chart. Point it at the Calculated Table. When you do that, the chart should fill in all the other columns correctly.

Charting Both State Profiles for Both Objects

In order to chart both profiles on the same chart, we first need to add a column to the Statistics Collector, and then update the query in the Calculated Table. The new column should be named ObjectAndProfile, with a Storage Type of String.
Use the following code for a value:

    data.eventNode.name + " - " + string.fromNum(data.ProfileNum.as(int))

Then change your query to the following:

    SELECT ObjectAndProfile, Time AS StartTime,
        LEAD(Time) OVER (PARTITION BY ObjectAndProfile) AS EndTime, State
    FROM StatisticsCollector1

With these changes, you should be able to view both profiles for both multiprocessors.
FlexSim 2022 introduced a Reinforcement Learning tool that enables you to configure your model to be used as an environment for reinforcement learning algorithms. That tool makes connecting to FlexSim from a reinforcement learning algorithm easier, but that tool is not absolutely necessary for this type of connectivity. The same socket communication protocols that are used by that tool are available generally in FlexScript. Attached (ChangeoverTimesRL_V22.0.fsm) is the FlexSim 2022 model that you build as part of the Using Reinforcement Learning documentation that walks you through the process of building and preparing a FlexSim model for reinforcement learning, training an agent within that model environment, evaluating the performance of the trained reinforcement learning model, and using that trained model in a real production environment. Also attached (ChangeoverTimesRL_V6.0.fsm) is a model built with FlexSim 6.0.2 from 2012 that does the exact same thing, but with custom FlexScript user commands instead of the Reinforcement Learning tool. You can use this model with the example python scripts and FlexSim 6.0.2 in the same way that you can use the other model with those same scripts in FlexSim 2022. I'm providing this FlexSim 6 model as an example that demonstrates how you can communicate between FlexSim and other programs. The Reinforcement Learning tool certainly makes this type of communication easier and simpler, with a nice UI for specifying RL-specific parameters, but the fundamental principles of how this works have been available in FlexSim for many years using FlexScript. Hopefully this example can help teach and inspire those who wish to control or communicate with FlexSim from external sources for purposes other than just reinforcement learning. FlexSim is flexible, and the possibilities are endless.
Tokens and flowitems can be very difficult to add to a chart, because they don't exist on Reset, making them difficult to select. This article shows how you can use a Process Flow to allow a Statistics Collector to record a token's changing label value, and also to chart that value over time.

The model for this example is attached (graphlabeldata.fsm). It is a very simple model: the Scheduled Source creates three tokens, each of which creates a label called data. This label is created by choosing "Add Tracked Variable" for the value, which opens this dialog box:

The reason we want the label to be a Tracked Variable is that Tracked Variables emit an OnChange event. We want to listen to that event. If you use the time interval collection method, discussed later in this article, you don't need to make the label a Tracked Variable.

Each token then goes through a loop, where it waits, and then updates the value. This is meant to represent a much more complicated model, where the token travels through many activities, any of which could change its label value. For this example, the model randomly changes the value on the label.

Now that we have a token and a label whose value is changing as the model runs, we can work on making a chart. We want to eventually make a Statistics Collector, but Statistics Collectors can only listen to events of objects that exist after the model has been reset. Tokens and flowitems (along with their labels) are destroyed on reset, and so we can't listen directly to them. However, some Process Flow activities can listen to events on tokens and flowitems, and the Statistics Collector can listen to those events. For that reason, we make a second Process Flow:

This flow has an Event-Triggered Source, which listens for tokens to leave the "Init Tracked Variable" activity in the first flow. When that happens, the source creates a token, and that token immediately gets a reference to the label node (note that this is different from the value of the label node). Next, the new token goes to a Start activity called "Log Change." This activity is just a placeholder. While you could technically live without it, it makes things a little clearer, as we will discuss later. Other than providing OnEntry and OnExit events, the Start activity has no internal logic whatsoever. After passing through the Log Change activity, the new token waits for the label value to change:

In order to listen to this event, you can first sample a Tracked Variable in the Toolbox. This provides the OnChange event. Then you can update the Object field to the code shown above. Notice that every time this event happens, the token simply passes through the Log Change activity, and then resumes listening.

When the original label value changes, it emits an OnChange event. When that event fires, the token listening to that event travels through the Log Change activity, which emits OnEntry and OnExit events. We can use these events in the Statistics Collector. The key to this technique is that we used Process Flow, which is good at listening to token and flowitem events, to generate activity events, which can be used in the Statistics Collector. In the attached model, the first Statistics Collector is configured like this:

It simply listens to the On Entry of the Log Change activity.
The columns are defined as follows: the first two columns are simpler; the Time column uses the Model Date/Time option, and the second column gets the ID of the token as an integer. The third column gets the current value of the Data label.

Now that the Statistics Collector is set up, we can configure the chart to use this collector, and split by the Token ID.

The process to record the label value every interval (rather than on every change) is very similar. The downside is that the data is less granular, but the upside is that a label doesn't have to be a Tracked Variable to be charted. The example model simply uses a Split activity to copy the data from the Event-Triggered Source, and sends it to a similar listening loop: instead of waiting for the value to change, the second token waits for a fixed time interval. A similar Statistics Collector will allow you to create the following chart:

This approach works for every token created by the scheduled source. No matter how many tokens you create, each will show up on the chart.
The attached model contains functionality to depict the item flow as a 3D map, using a FlowMapper3D object (cylinder) and an associated Object Process Flow. Additionally, a 'kpi' label on the object gives an indication of layout performance, which you can link to and observe as you interact with or experiment on the layout.

To set this up in your model you'll need to add a Group of objects whose entry events will be used by the mapper, calling that Group "FlowMapperObjects". Then you'll need to add a ColorPalette called "HeatPalette". Finally you'll want to copy the FlowMapper3D object and the FlowMapperProcess to your model. Note that there is a boolean label 'showPercents' on the FlowMapper3D object to tell it whether to show percentage text or the number of flowitems for each location pair. 3DFlowMapper.fsm
In version 2018 and on, you can make this chart by dragging the Throughput Per Hour by Type template from the dashboard library. If you install the template (available on the Advanced tab), you will see a Process Flow and a Statistics Collector appear in your toolbox.

One of the most common questions from FlexSim users is: how do I make a chart that shows the output every hour? You can make this chart in three steps.

Configure the Statistics Collector

First, you need a Statistics Collector. Make a new one in the toolbox (click the green plus button, select Statistics, and then select Statistics Collector). On the Event Listening tab, use the green plus button to add a timer event, and configure it as shown here:

This timer event will fire every hour (every 3600 seconds) in the model. Notice the shared label, which stores all members of the Processors group as an array. We will use this label in the next step.

Once you have configured the timer, you need to set up the row mode for this collector. We want one row per processor, and we need to use the Processors label as the row value. Since the Processors label is an array, we will get three rows per timer event, each row corresponding to a processor.

Finally, we can add the columns. The three columns are as follows:

Time - use the pick list to select Model Date/Time from the Time menu
Object - use the pick list to select ID of row value from the IDs menu
Output - use the pick list to select Statistic by Object from the Object Statistics menu, and use data.rowValue as the object value in the popup

If you use the pick options to choose these options, then the storage type and display format options should be set automatically. With these three columns in place, we can watch the table populate. Reset and run the model at high speed. Every model hour, you should see a new set of rows appear, one for each processor in the group. The table will look something like this:

Configure the Calculated Table

The Statistics Collector table from the previous steps is close to what we want, except that the output value always increases as the model runs. But what about the output for just a single hour? To get that value, we can use a Calculated Table. Make a new Calculated Table, and give it the following query (in the Query field):

    SELECT Time, Object,
        ISNULL(Output - LAG(Output) OVER (PARTITION BY Object), 0) AS OutputPerHour
    FROM StatisticsCollector1

This query uses SQL window functions. Basically, it says that each row's value should subtract the previous row's value for the same object. In addition, if that value is NULL (because it's the first row for that object), then just use 0.

If you reset and run the model, so that the collector table has at least a few rows in it, click the Update button to run the query. Notice that the Time and Object columns show numbers. This is because the Calculated Table can't infer the formatting of the column. To set the formatting, use the Display Format tab. You may also wish the table to update every hour, with the Statistics Collector.

Make the Chart

Now that our data is correct, we can make a chart. Make a new dashboard, and create a Time Plot chart. Point the chart to the Calculated Table. Use the Time column for the X values, and the OutputPerHour column for the Y values. In addition, make sure to split by the Object column. If the Calculated Table updates every hour, then running the model should create the chart shown at the beginning of this article.
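Under the hood, the Output column's pick option reduces to reading the output statistic of the row's object - roughly the following expression (a sketch of what the pick option generates, not copied from it):

    // Cumulative output of the processor represented by this row.
    data.rowValue.as(Object).stats.output.value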
Here is the model used to create this chart (should work in 2017 Update 2 Beta or later; beta must be built on or after August 21, 2017). outputperhourdemo.fsm
Using Mixamo Fuse, Mixamo, and (optionally) 3ds Max, you can create animated characters for use with FlexSim. For modifying the existing operators' animations, see Adding More People Animations From Mixamo.

Character Creation

You can customize a character's look using Mixamo Fuse. After creating the character, you can save your character as a .fuse file in case you want to edit it again later in Mixamo Fuse.

Character Rigging

From Mixamo Fuse, you can press the Animate button in the upper-right corner to upload the character to Mixamo's website to apply animations. You will need to log in with a Mixamo account. You will be presented with the Mixamo Auto-Rigger. After a few moments, your character will appear in a view with a default animation applied. Choose whether to enable facial blendshapes and select the appropriate level of detail for the skeleton, such as 2 Chain Fingers, and then press Update Rig. The auto-rigger will reapply and the view will be updated with the newly rigged character. If the settings are good for your needs, press the Finish button.

Character Animating

Once the character is rigged, you can select it on the Mixamo website and press Find Animations to find animations to apply to the character. You can search for animations and add them to packs. For each animation, you can adjust various parameters to better fit your character to that animation.

Exporting the Character and Animations

Once you have applied and customized animations for your character, you can add them to the Cart and check out in order to have them available for download. After checking out, you can select an animation or animation pack to download on the My Animations page. You can apply the same animation sets to different characters using the Change Character button after selecting the animation set. Press the Queue Download button to tell the Mixamo server to create the files for download. You want to use the FBX format so that we can edit it with 3ds Max. The pose can probably be T-Pose or Original, because the Original is probably a T-Pose anyway. I use 30 Frames per Second for consistency and ease of timing animations. Keyframe Reduction didn't seem to do anything in my tests, and you can do it yourself with more control in 3ds Max, so I use "none." If you exported multiple animations, the output will contain one fbx file that is the character shape and several other fbx files that contain only animation data for each animation.

(Update) Important Note: Starting with FlexSim 2017 Update 2, the rest of the steps in this document are no longer necessary. FlexSim 17.2 added support for embedded textures, specular maps, and gloss maps that fix the material issues directly without modification in 3ds Max. FlexSim 17.2 also added support for assigning animations to a shape from multiple files, so you can import the Mixamo output files directly without having to merge all the animations into a single file. The following steps can be used if you want to optimize or otherwise modify the shape files, but are no longer required.

Preparing the Character using 3ds Max

Import the character shape .fbx file into 3ds Max and save the file as a .max file. You will want to resave as a .max file at different points with different file names as you progress, so that you can easily revert back to a previously known working point if something messes up. The FBX Import window will show you lots of options you can modify. The defaults seemed to work fine.
I tried messing with the "Units" scale factor and file unit conversions, but I was unable to find any options that improved the scaling in FlexSim. Ultimately, I found it best to just use Automatic and sort out the units myself using FlexSim's shape factors. Export the file as fbx and bring it into FlexSim to see how it looks. Again, the default options seem to work fine. In FlexSim, the size will probably be 100x too big (no matter what "unit conversion" options I seemed to set in the import/export options). Resize the character to be 100x smaller.

Fixing the Materials in 3ds Max

When you first bring a Mixamo Fuse character into FlexSim, it will probably look shiny and dark. That can be fixed in 3ds Max. Open the Material Editor. Double-click each material in the list of Scene Materials on the left to add it to the middle view for editing. Press the "Lay Out All - Vertical" button to arrange the nodes nicely in the middle view. When you imported the .fbx file, 3ds Max should have automatically unpacked its textures into a directory called CharacterFileName.fbm. When you double-click on a texture node in the Material Editor view, you should be able to see and edit the path where it is referencing each texture in the Material Parameter Editor pane on the right.

Removing the shininess

Each material has Ambient, Diffuse and Specular colors, in addition to any textures that are mapped to various channels. The operator is shiny everywhere because each material's Specular color is set to White and their specular maps are mostly black. Since FlexSim doesn't currently read specular maps and gloss maps, the white specular color is being applied everywhere instead of the black color from the specular maps. To fix it, click each specular map and gloss map in the center view and delete them with the Delete key. Then double-click on each main material and set its specular color to black to turn off the shininess. Save the .max file with a new name, and re-export the .fbx file to test it in FlexSim. The shininess should be gone.

Brightening the darkness

FlexSim uses Assimp to read fbx files. Assimp's fbx importer sets a default Ambient color value of dark gray on the materials instead of leaving it off. This is why the character looks darker than he should. To fix it, we can simply specify an ambient color of white on each material. For each material, set the ambient color to white and check the Ambient Color box under Maps. Save the .max file with a new name, and re-export the .fbx file to test it in FlexSim. The darkness should be fixed.

Setting local texture paths

Based on the procedure above, the texture paths are likely to be absolute paths when you export the fbx file. You can fix that by saving the .max file, the .fbx file, and the textures all in the same directory. In Windows Explorer, copy all the useful textures from the CharacterName.fbm directory into the same directory where you have been exporting the .fbx file. Then also save your .max to that same directory. Once everything is in the same directory, you can reselect the texture file paths for each material so that they are pointing at these files instead of the other files. Then, when you export the fbx file, each texture's path will be relative, so you can copy all the necessary files onto any computer and use them correctly. After changing all the materials, export the fbx file again.
You should be able to copy the fbx file and all its textures into a directory on a different machine and have them display properly. Save your .max file with all the updated materials.

Making part of the object change color

To make a material show the FlexSim color, append _fsclr to its name in 3ds Max.

Configuring Animations

With the character loaded, you can import the fbx files with just animation data, and it will automatically apply that data to the shape. The slider at the bottom controls the currently applied keyframe. The buttons in the bottom corner play the animation, pause, or step between keyframes. Animation data can be stored beyond the relevant range. You can open the Time Configuration dialog by pressing the button in the bottom corner. In that dialog, you can specify the speed of the playback and set a Start Time (keyframe number) and End Time for the animation you want to see in the viewport when you play. This dialog only changes the playback options in the software; the actual data outside the specified range is still preserved. You may need to adjust these values as you import/modify/combine animations into one timeline.

Keyframes

If you push the Curve Editor button on the top toolbar, it will open the Track View - Curve Editor. You can edit and modify keyframes through this view. Click in the left pane and press Ctrl-A to select everything in the model. You should see keyframes appear if you have animation data loaded. Sometimes this window doesn't refresh correctly and you can't find keyframes. You can try closing it and reopening it, or pressing the buttons that re-center the view on the keyframes in the configured time range. To edit the animations, you need to be able to see the keyframes in this view.

You can drag the current frame by dragging the yellow bar around. You can zoom by scrolling the mouse wheel. Ctrl-mouse wheel will zoom the timeline (X-axis) without zooming the key values (Y-axis). You can select keyframes by dragging a rectangle around them. You can delete selected keyframes with the Delete key. You can move keyframes by dragging them. If you hold Ctrl while dragging them, it will force the movement to be only along the X-axis and not along the Y-axis, so that you don't actually edit the keyframe's values, just its time. You can duplicate keyframes by holding Shift and dragging them. Again, holding Ctrl will prevent the new keyframes' values from shifting by holding the Y-axis still. You can scale keyframes by selecting them, putting the key keyframe indicator on the first keyframe in the selection, picking the menu option Edit > Transform Tools > Scale Keys Tool, and then clicking and dragging on one of the selected keyframes to scale all of the selected keyframes towards the yellow bar. This can be helpful to sync the timing of animations, such as walking empty vs. walking loaded.

Preparing Animations for Merging

When you load an animation into the file, the Track View - Curve Editor will show you the individual components that have keyframe animations. Different animations may affect different components, and the interpolation of transformation information between keyframes may get messed up when you merge different animations that affect different components together into one timeline. To fix this, you can stamp a keyframe at the beginning and end of each animation that stores each component's position at that point.
You can also duplicate the first and the last keyframes to make it easier to specify perfectly-repeating animation clips in FlexSim later. Stamp a keyframe by moving the current frame (yellow line) to the keyframe you want. Then select all within the Track View - Curve Editor to select every component. Finally, press the Set Keys button at the bottom of the main 3ds Max window to store each component's value as a keyframe. Duplicate keyframes by holding Shift and dragging them. Also holding Ctrl will prevent the new keyframes' values from shifting by holding the Y-axis still.

Merging Multiple Animations into One File

To merge multiple animations into one file, you need to export each of the animations as an animation file (XML Animation File (*.xaf)). Then you can import each of these animations into a specific range of keyframes in the timeline. From the .max file without any animations loaded, import one of the fbx animation files. It will automatically apply it to the shape. Click in the Perspective view and press Ctrl-A to select everything. Then prepare the animation using the information above. Lastly, select the menu option Animation > Save Animation to save the animation as an XML Animation File (*.xaf). Reload the .max file without any animations and repeat this process for each animation you want to merge.

Reload the .max file without any animations loaded. Click in the Perspective view and press Ctrl-A to select everything. Then select the menu option Animation > Load Animation to load an animation into the timeline. On the right side of the Load XML Animation File dialog, you can specify the keyframe number at which the animation should be inserted. Do not insert the animation at keyframe 0; make sure that keyframe 0 is the bind T-pose. This will be important later if you want to edit the mesh without breaking the skeletal rigging. You also need to be sure to specify Absolute and not Relative. Select the animation you want to import, specify the keyframe you want to insert at, and then press Load Motion to load the keyframes into the animation. Do this for each animation file you want to merge. Use the Track View - Curve Editor to determine the keyframe at which to insert each subsequent animation. Save your .max file with all the merged animations to a new file.

Modifying the Mesh without Breaking the Rigging

Sometimes you may want to modify the mesh after you have applied animations. I can't guarantee that this always works perfectly, but there is a way to make some tweaks even after animations have been applied. Before trying to edit a skinned mesh, be sure to save your work so you can revert back to a clean state if something doesn't work correctly. First, you need to be sure that the bind pose is frame 0 and that you are set to frame 0 on the timeline. Second, click on a mesh in the scene and click on the Modifiers tab. You will probably see an Editable Poly with a Skin modifier applied. If you click on the Skin modifier and expand the Advanced Parameters, you will see an Always Deform checkbox. If you clear the Always Deform box, you can then click on the Editable Poly and modify it. You then need to recheck the Always Deform box on the Skin modifier once you are done making edits. Ensure that your animations still work after making edits. In my tests with the operator shape above, when I cleared Always Deform, the mesh moved up slightly. This made the Tops mesh not line up correctly with the Body mesh.
To fix that, I cleared and immediately checked Always Deform for each mesh (each mesh then moved up the same amount). Then I verified that the animations still worked and that the arms still lined up correctly with the shirt.

Reducing Polygon Count

3ds Max has features for reducing the polygon count of an object. You can apply these modifiers without breaking the rigging by following certain steps. You can display the polygon count of your shape by clicking in the Perspective view and pressing the 7 key on the keyboard. As stated above, before modifying the mesh, clear Always Deform on the Skin modifier. To reduce the polygon count, you can use the ProOptimizer modifier. After clearing Always Deform, click on the Editable Poly and then select ProOptimizer from the Modifier List to add it to the stack between the Editable Poly and the Skin modifiers. In the Optimization Options, be sure to check Keep Textures. Then press the Calculate button. Then you can specify a Vertex % and it will start to remove vertices from your mesh. After applying the ProOptimizer modifier, the normals on your mesh might be messed up. You can recalculate the normals based on a crease angle using the Smooth modifier. Apply the Smooth modifier after the ProOptimizer and before the Skin modifier. Check the Auto Smooth box and specify a Threshold. You can use a value of 89 to get a very smooth surface, only applying a crease to angles that are greater than 89 degrees. After making these changes, be sure to recheck Always Deform on the Skin modifier, and test your animation to make sure the rigging is all still working properly.
This article explores an example model. In this model, items on downstream lanes are able to reserve dogs so that items on upstream lanes cannot use them: reservedogdemo.fsm

About Dogs in FlexSim

FlexSim simulates dogs on a power-and-free system in an extremely abstract and minimal way. A dog isn't a persistent entity at all. Instead, FlexSim calculates where dogs would be, given the speed, and when they would interact with items. This has a huge performance benefit. But if your logic needs items to interact with specific dogs, this can pose a problem: how do you interact with such an abstract entity?

The Catch Condition

The only time you can "see" a dog in FlexSim is during the conveyor's Catch Condition: https://docs.flexsim.com/en/23.1/Reference/PropertiesPanels/ConveyorPanels/ConveyorBehavior/ConveyorBehavior.html#powerAndFree

The catch condition fires when a dog passes by an item. If the catch condition returns 1, the item catches the dog and transfers to the power-and-free conveyor. If the catch condition returns 0, the item does not catch the dog. During the catch condition (and only during a catch condition), you can learn many things about a dog:

ID - Each dog has an ID. The ID is derived from the length of the conveyor and the distance the conveyor has travelled. If a conveyor is 26 dogs long, the dogs will have IDs 1 through 26.
Location - Since an item is trying to catch the given dog, you can derive the dog's location from the item's location.
Speed - The conveyor that owns the dog is "current" in the catch condition, so you can get the speed of the conveyor at that point.

We'll use all these pieces of information in a moment.

Creating Tokens to Represent Dogs

The first real insight into this model is to make a dummy item. The purpose of this dummy item is to cause the catch condition to fire. It never gets on the conveyor. But when the catch condition fires, it makes a token that represents the dog. In this example, that item has a label called "DogFinder". Here is the relevant code from the catch condition:

    if (item.DogFinder?) {
        Object pe = current.DogPE;
        if (pe.stats.state(1).value == PE_STATE_BLOCKED) {
            return 0;
        }
        double dist = current.MaxDogDist;
        double speed = current.targetSpeed;
        double duration = dist / speed;
        if (!item.labels["DistAlong"]) {
            item.DistAlong = Vec3(item.getLocation(1, 0, 0).x, 0, 0).project(item.up, current).x;
        }
        Token token = Token.create(0, current.DogHandler);
        token.DistAlong = item.DistAlong;
        token.Conveyor = current.as(treenode);
        token.Duration = duration;
        token.DogNum = dogNum;
        token.Speed = speed;
        token.DetectTime = Model.time;
        token.release(1);
        return 0;
    }

There's a lot going on in this code:

This logic only fires for the fake dog-finder item.
If the photo eye just upstream from the dog is blocked, that means there is an item, and this dog is not available. Return here if that's the case.
Figure out how long this dog will last (the duration), assuming the conveyor runs at the same speed. In this model, there's a label on the conveyor called MaxDogDist. This is the distance from the PE to the end of the conveyor, minus 2 meters.
If this is the first dog ever, calculate the position of the dog, given the position of the item. Store that on a label.
Create a token with all kinds of labels. We'll need all this information to estimate where the dog is later, and to estimate how far it is from other items.

Pushing Dog Tokens to a List

Once the token is made, we need to push it to a list, so that items can pull them.
If all your items are the same size, you can just push the token to a list directly. In this model, however, there are larger items that require two dogs, so there's a batch activity first. The dummy item is far enough back that it can detect two dogs and still push the first dog to the list in time for the first lane. So it holds the dog back in a Batch activity until one of two things happens: either the next dog token appears, completing the batch, or the max wait timer on the batch expires, indicating that the next dog is not available (otherwise, there would have been a token). This duration is based on the conveyor's speed and dog interval. If the batch is complete, the first dog in the batch can be marked as a "double", meaning the dog behind it is also available. Once the flow has determined whether the dog is a single or double, it pushes it to the list.

Creating the DistToDog Field

When pulling a dog from the list, an item needs to know the position of the dog relative to the item. Is it 0.3 meters upstream? Or is it 2 meters downstream? When we query the set of dogs, we need to filter out downstream dogs and order by upstream distance, to reserve the closest one:

    WHERE DistToDog >= 0 ORDER BY DistToDog

Here, DistToDog is positive if the dog is upstream, and negative if the dog is downstream. The code for this field is as follows:

    /**Custom Code*/
    Variant value = param(1);
    Variant puller = param(2);
    treenode entry = param(3);
    double pushTime = param(4);
    double distAlong = Vec3(puller.getLocation(1, 0, 0).x, 0, 0).project(puller.up, value.Conveyor).x;
    double dt = Model.time - value.DetectTime;
    double dx = value.Speed * dt;
    double dogDistAlong = value.DistAlong + dx;
    return distAlong - dogDistAlong;

This code assumes that the item waiting to merge is the puller. So we calculate the item's "dist along" the main conveyor. Then we estimate the current location of the dog, given where the DogFinder item created the token. Then we can find the difference between the item's position and the dog's position.

Pulling Dogs from the List

Each incoming lane has a Decision Point. The main process flow creates a token when an item arrives there. At a high level, this token just needs to do something simple: pull an available downstream dog token. If all the items are the same size, it's that simple. But this example is more complicated! If the item is large, we also need to pull the upstream dog behind the dog we got, so that no other item can get that dog. And it gets even more complicated! It can happen that an item acquires the dog after a double dog. In that case, we need to mark the downstream dog as "not double", so that big items won't try to get it. Most of the logic in the ConveyorLogic flow handles that case.

Using the Dog

Finally, the item must be assigned to that dog. The ConveyorLogic flow sets the DogNum label on the item. Then the catch condition checks whether the dog matches the item's DogNum.

Upstream Items

The final piece of this model is allowing upstream items to catch a dog on this conveyor. This model adds a special label to those items called "ForceCatch". The catch condition always returns true for those items.
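Putting those last two sections together, the per-item portion of the catch condition boils down to a few lines (a sketch based on the article's labels; the DogFinder block shown earlier sits above this in the real model):

    // Upstream items bypass the reservation logic and catch any dog.
    if (item.ForceCatch?)
        return 1;
    // Otherwise, only catch the specific dog this item has reserved.
    return item.DogNum == dogNum;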
If you've ever tried to nest groups of objects inside a hierarchy of planes, you may find the drawing of the planes suboptimal and lacking information. Using the container (modified plane) in the attached user library, you can represent the container with just an outline. A settings dashboard is installed with the library, along with some user commands and global variables. Corner prisms show the nesting layers under the prism.

The option 'Use the container center' allows you to use it either as a plane, as before, or, when unselected, as a bordered frame where dropping an object or clicking and dragging within the borders behaves as though you are dropping onto or clicking/dragging the model floor. You can also choose to hide the containers entirely for the cleanest visuals.

I hope this will encourage users to use containers more, since when coupled with Templates and Object Process Flows they can increase scalability and make your developed assets more manageable. (In those cases the container becomes the member instance of the process flow or template master, and references to its components are made through pointer labels on the container rather than names, which you may want to alter for reporting purposes. The pointer labels are updated automatically when creating a new instance of the container.) ContainerMarkers_v1.fsl

If you want planes you already have in your model to adopt this style, just add this to their draw code:

    return containerdraw(view, current);
In the attached model we use a Timetable and two MTBF/MTTR objects to define Schedule Loss, Availability Loss (breakdowns), and an element of Performance Loss due to short stops (state Down). The processor sends 'bad' items to port 2 based on the send-to percentage, which accounts for Quality Loss. The processor's 'best' processing time per part (5 seconds) is stored as a label, while the processing time itself is a triangular distribution with the minimum at 5 seconds, so it also contributes to performance loss. When the Type of the item changes, a setup time occurs, which is the final contributor to performance loss.

Two state profiles were added to the processor: one to track production time and another for availability. An object process flow on the processor detects production-profile state changes (between on and off shift) and regular FlexSim state changes, and determines the availability state that should prevail.

A user command getOEEstat is used to access the values, which it calculates on demand and stores in a label on the processor called statsMap. The syntax for this command is:

    getOEEstat(myMachine, "OEE")

The list of stats: "ScheduleLoss", "AvailabilityLoss", "PerformanceLoss", "QualityLoss", "IdealProdTime", "AvailabilityRatio", "QualityRatio", "PerformanceRatio", "RunTime", "OEE", "TEE".

A group was used to indicate which objects have their OEE tracked, and a Statistics Collector reads the group members and adds rows at reset. Finally, Performance Measures were added for the stats for processor 1. Processor_OEE_2.fsm

2023-08-22 Update: Added 'TEE' stat.
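For example, a script or dashboard could read a few of these stats like so (a sketch; "Processor1" stands in for whatever your tracked object is named):

    // getOEEstat computes on demand and caches results in the statsMap label.
    Object machine = Model.find("Processor1");
    double oee = getOEEstat(machine, "OEE");
    double perfLoss = getOEEstat(machine, "PerformanceLoss");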
In this example model you'll see two identical elevator setups. However, you will notice that ElevatorBank1 allows the patient to move to the next floor properly, whereas ElevatorBank1_2 floats the patient up the network node instead of using the elevator. There are a few steps you must follow to ensure you have a properly working elevator in your model.

First, make sure everything is working the way you intended without an elevator. Now you can add in the elevator, select it, and then check the 'Connect to Path' box as seen below. Then ensure that the elevator is connected to the nearest path node and any nodes above it.

One thing to note: after resetting, if you click on any of the path nodes that the elevator is connected to, you will see that the On Arrival trigger now says Send Message to Request Elevator. This is the code that actually calls the elevator when a patient arrives at the node. The elevator automatically adds this to the nodes it is connected to when resetting, but this trigger option can be added to any node.

Another good practice, especially if patients walk by the elevator without always using it, is to make a separate node off on a spur. That way patients aren't triggering the elevator every time they walk by.
A narrated video demonstration of the FlexSim Healthcare Tutorial described in the FlexSim 2020 User Manual has been released! Here is a link to the written documentation: https://docs.flexsim.com/en/20.0/Tutorials/FlexSimHC/OverviewFlexSimHC/ Here's a link to the video: https://vimeo.com/394012280
Modeling LWBS (left without being seen) patients is tricky business, and can cause exception errors under certain circumstances if not done correctly. The problems usually arise due to patients who exit the model early while there are pending requests queued up in the model for future activities on the patient. The attached model demonstrates the current best practice for safely modeling LWBS patients without the possibility of generating unwanted errors.

The modeling technique is very simple. Use the On Entry trigger of the waiting room object(s) to send a delayed message to itself in X minutes, where X is a sample time from an "impatience curve" representing the amount of time a typical patient is willing to wait before they strongly consider leaving. In the On Message trigger of the waiting room object(s), I have written a code snippet that you will want to copy and modify to suit your own modeling requirements. The code snippet in the example model checks to make sure the patient is still in the waiting room waiting for an exam room at the end of the "impatience time" when the message trigger fires. Then I roll the dice using a bernoulli distribution to decide whether or not to have the patient actually leave. The code snippet shows a different probability for each of two patient types (PCIs) and then a default probability of 50 percent for anyone else.

Not only does this modeling approach avoid undesirable exception errors, but it is also more accurate and definitely more efficient than using the Quick Properties fields in the Patient Condition panel of a waiting room, which requires repetitive function calls every so many minutes throughout the model run!
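In rough FlexScript terms, the pattern looks like this sketch (the distribution parameters, label names, and probabilities are placeholders, not the attached model's exact code):

    // On Entry of the waiting room: schedule an impatience check for this patient.
    senddelayedmessage(current, triangular(15, 90, 30), current, item, 0, 0);

    // On Message of the waiting room: decide whether the patient actually leaves.
    treenode patient = msgparam(1);
    if (patient && patient.up == current) {         // still waiting in this room?
        double leavePercent = 50;                   // default for all other patients
        if (patient.PCI? == 1) leavePercent = 30;   // hypothetical per-PCI values
        else if (patient.PCI? == 2) leavePercent = 40;
        if (bernoulli(leavePercent, 1, 0)) {
            // route the patient out of the model as LWBS here
        }
    }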
In this video you will learn how to use the Text visual object to name parts of a simulation model or to display statistics dynamically. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.
In this video you will learn how to use the Send To Port property of 3D objects to define the routing of flowitems. For more tutorial videos, you can subscribe to the FlexSim Andina YouTube channel and access our FlexTips playlist.