Normalise n-dimensional data


Make the normalize node have n inputs and n outputs.

 

At first we were excited to find the normalize 2024 node, but then we saw it is just a vector normalize (3 inputs). We already had quad normalize and vector product (normalize checkbox), so this node brings nothing new to the table. We need a node that handles large numbers of inputs, like weights or more complex situations (e.g. rigging).

Please expand it, or make a new node (e.g. n-normalize) whose many inputs and outputs map to each other. This should apply across the board: dimensionality should not be a limiting factor for basic math nodes.


[Image: tristan_0-1686816791524.png]
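
For reference, the math we need for weights is trivial; the node just has to apply it across an arbitrary number of inputs. A minimal sketch in plain Python, assuming a hypothetical n_normalize (this is not an existing node or API):

    def n_normalize(values):
        # Scale n inputs so they sum to 1 (typical skin-weight normalization).
        total = sum(values)
        if total == 0.0:
            # Degenerate case: distribute evenly instead of dividing by zero.
            return [1.0 / len(values)] * len(values)
        return [v / total for v in values]

    # Example: four influence weights on one vertex.
    n_normalize([0.5, 0.25, 0.15, 0.35])  # -> [0.4, 0.2, 0.12, 0.28], sums to 1.0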

 

Please include a switch for the various normalization types, for example (a rough Python sketch of such a switch follows the list):

  1. Min-Max Normalization (Rescaling):
    normalized_value = (value - min_value) / (max_value - min_value)
    Goal: Rescale the values within a specific range, typically between 0 and 1.
    Explanation: This formula linearly transforms the values based on the minimum and maximum values in the dataset. It ensures that the minimum value is mapped to 0 and the maximum value is mapped to 1, with other values scaled proportionally in between.

  2. Z-Score Normalization (Standardization):
    normalized_value = (value - mean) / standard_deviation
    Goal: Standardize the values to have a mean of 0 and a standard deviation of 1.
    Explanation: This formula standardizes the values by subtracting the mean and dividing by the standard deviation. It helps in comparing values across different distributions and identifies how many standard deviations a value is away from the mean.


  3. Decimal Scaling Normalization:
    normalized_value = value / 10^d (where d is the number of decimal places to shift)
    Goal: Shift the decimal places of the values to achieve normalization.
    Explanation: This formula divides the values by a power of 10, shifting the decimal place. Here d is chosen so that the maximum absolute value in the dataset falls below 1. It simplifies calculations and maintains the relative order of values.


  4. Log Transformation:
    normalized_value = log(value)
    Goal: Transform the values using logarithmic scaling.
    Explanation: This formula applies a logarithmic function to the values (it requires positive inputs). It compresses the range of large values, emphasizing smaller differences. It is useful when dealing with skewed data or when the ratio between values matters more than the absolute difference.

  5. Sigmoidal Transformation:
    normalized_value = 1 / (1 + exp(-value))
    Goal: Map the values to a bounded range between 0 and 1 using a sigmoid function.
    Explanation: This formula uses the sigmoid function to map values to the interval (0, 1). It compresses extreme values towards the upper and lower bounds while preserving the order of values. It is commonly used in machine learning for classification tasks.

  6. Softmax Transformation:
    normalized_value = exp(value) / sum(exp(all_values))
    Goal: Normalize a set of values into a probability distribution.
    Explanation: This formula exponentiates each value and divides it by the sum of exponentiated values across the dataset. It ensures that the resulting values sum up to 1, making them interpretable as probabilities. It is commonly used in multi-class classification problems.

  7. Unit Vector Normalization (Vector Normalization or Euclidean Normalization):
    normalized_value = value / sqrt(sum(value^2))
    Goal: Scale vectors to have a length (magnitude) of 1.
    Explanation: This formula divides each value by the Euclidean norm (magnitude) of the vector. It normalizes the vector to have a constant length, making it independent of the scale. It is useful in machine learning algorithms that rely on distance calculations.

  8. Square Root Transformation (Square Root Scaling):
    normalized_value = sqrt(value)
    Goal: Normalize values by taking the square root.
    Explanation: This formula applies the square root function to the (non-negative) values. It reduces the impact of large values while preserving their relative order. It is useful when dealing with data that has a wide range of magnitudes. (Note: this is sometimes conflated with Pareto scaling, which instead divides mean-centred values by the square root of the standard deviation.)

 

  9. Median-MAD Normalization (Median Absolute Deviation):
    normalized_value = (value - median) / median_absolute_deviation
    Goal: Normalize values by using the median and median absolute deviation.
    Explanation: This formula subtracts the median from each value and divides it by the median absolute deviation (MAD). It provides robust normalization by considering the central tendency and spread of the data. It is helpful when dealing with outliers or non-normally distributed data.
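
To make the switch idea concrete, here is a minimal pure-Python sketch of the nine modes above behind a single enum-style parameter. The function name, the mode strings, and the guards for degenerate inputs are all illustrative assumptions, not an existing Maya or Bifrost API:

    import math

    def normalize(values, mode="minmax"):
        """Dispatch over the normalization types listed above (illustrative only)."""
        n = len(values)
        if mode == "minmax":                      # 1. Min-Max rescaling to [0, 1]
            lo, hi = min(values), max(values)
            span = (hi - lo) or 1.0               # guard: constant input
            return [(v - lo) / span for v in values]
        if mode == "zscore":                      # 2. Z-score standardization
            mean = sum(values) / n
            std = math.sqrt(sum((v - mean) ** 2 for v in values) / n) or 1.0
            return [(v - mean) / std for v in values]
        if mode == "decimal":                     # 3. Decimal scaling: shift d places
            d = len(str(int(max(abs(v) for v in values))))
            return [v / 10 ** d for v in values]
        if mode == "log":                         # 4. Log transform (positive inputs)
            return [math.log(v) for v in values]
        if mode == "sigmoid":                     # 5. Sigmoid squashing to (0, 1)
            return [1.0 / (1.0 + math.exp(-v)) for v in values]
        if mode == "softmax":                     # 6. Softmax (shift by max for stability)
            m = max(values)
            exps = [math.exp(v - m) for v in values]
            total = sum(exps)
            return [e / total for e in exps]
        if mode == "unit":                        # 7. Unit vector (Euclidean norm)
            norm = math.sqrt(sum(v * v for v in values)) or 1.0
            return [v / norm for v in values]
        if mode == "sqrt":                        # 8. Square root scaling
            return [math.sqrt(v) for v in values]
        if mode == "mad":                         # 9. Median-MAD (robust)
            s = sorted(values)
            median = (s[n // 2] + s[(n - 1) // 2]) / 2.0
            devs = sorted(abs(v - median) for v in values)
            mad = (devs[n // 2] + devs[(n - 1) // 2]) / 2.0 or 1.0
            return [(v - median) / mad for v in values]
        raise ValueError("unknown mode: %s" % mode)

For example, normalize([1, 2, 3, 4], mode="softmax") returns four values that sum to 1, while mode="minmax" on the same inputs returns [0.0, 1/3, 2/3, 1.0]. An n-dimensional node with one such switch would cover all of these cases at once.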



Thank you.
- Tristan


4 Comments
tristan
Contributor

@Anonymous, can you get this to the right people? 🙂




brentmc
Autodesk

Thanks Tristan.

Just curious: why wouldn't you consider Bifrost for applying more complex math operations to large amounts of data?

tristan
Contributor

Hi Brent,

For character rigging, Bifrost is the equivalent of an island in an ocean, far away from the mainland. We need proper APIs for automating through scripts and getting things done without much manual intervention; that has always been the core of our work. Bifrost is a tool directed at simulations and, admittedly, it looks very powerful and has grown into a shoe that could almost fit all, but it still doesn't. As long as Autodesk doesn't commit to one or the other as the main environment for rigging and animation, I'm afraid we are stuck working wherever those live the most, thus the legacy nodes.

I would love to go all out in Bifrost, and ideally Autodesk grows a pair and closes shop on the old nodes, but first it has to integrate everything regarding rigging and animation into Bifrost... We mostly work with animation nodes, skinning, HIK, IKs, constraints, joints: basically every operation that lives in the standard environment. Going in and out of Bifrost nodes (is there even a Python API?) is just too cumbersome for getting something basic done and adds more complexity to our task; the same goes for MASH expressions and, IMO, even MEL expression nodes. Most of us would rather go C++ for custom atomic math nodes.

Unfortunately, we at our firm are conditioned to work native for our clients, and the new efforts for basic math nodes introduced in 2024 show that there is still a large need for them, judging by the reactions I saw in the community.

-Tristan

brentmc
Autodesk

Thanks Tristan!

I'll take this feedback back to the team.
