So, I know Danmakufu is kinda bad about arrays. What I'm not sure of is the relative speed of common data and object dictionaries. My guess is it's at least somewhat slower than just using a variable, but I'm unsure how it compares to arrays.
Warning, incoming wall
You're essentially asking whether using a hash table to mimic an array is better than using an array for array-like operations. That could make sense if there were something genuinely problematic with the array implementation, but there isn't. Arrays aren't slow; they're O(1) access as expected. There are still some lingering misconceptions about this because back in 0.12m arrays were not O(1) access.
Meanwhile, hash tables are often internally implemented using arrays to begin with. There would have to be some additional factors in play for a straightforward array implementation to be worse than a hash table mimicking an array. Moreover, this ignores any extra work that has to be done to shape the input. ObjVals and CommonData both map from strings only, so at the very minimum you have to convert the numbers to strings which is at least O(k) in the number of digits k.
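To make that concrete, a CommonData-backed pseudo-array ends up looking something like the following sketch (the helper names and key scheme are mine, not anything standard):
// Hypothetical pseudo-array helpers on top of CommonData.
// Every single access has to build a string key from the numeric index first.
function PseudoArraySet(name, index, value){
    SetCommonData(name ~ "_" ~ IntToString(index), value);
}
function PseudoArrayGet(name, index){
    return GetCommonData(name ~ "_" ~ IntToString(index), 0);
}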
CommonData in particular would be a bad idea for namespace reasons alone: it's equivalent to using global variables everywhere, which is incredibly bad practice. Additionally, if you use CommonData for large pseudo-arrays and then open the debug window, you may be met with screaming lag.
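To show what the global-variable comparison means in practice (the key here is just an example): every script shares the same CommonData namespace, so two scripts using the same key silently clobber each other.
// In one script:
SetCommonData("count", 10);
// In a completely separate script running alongside it:
SetCommonData("count", 0);
// Back in the first script:
GetCommonData("count", -1); // => 0, the other script's value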
I think it can be assumed that DNH doesn't use dynamically-sized arrays, since there's no mechanism to grow or shrink them freely. Instead you concatenate arrays, which (probably) means each concatenation allocates memory for the new array and then copies the elements over. If you have any code that grows an array element-by-element, this will be very inefficient. However, because you can't instantiate arrays of a given size (which might have to do with them being of variable type and structure until assigned), all you can do is use a better algorithm, like the following (which performs O(log n) concatenations rather than O(n), and allocates O(n) total memory instead of O(n^2)):
function array(size, value){
    // Start from a single element and repeatedly double the array by
    // concatenating it with itself until it is at least `size` long.
    let a = [value];
    while(length(a) < size * 2){
        a = a ~ a;
    }
    // Slice off exactly the first `size` elements.
    return a[0..size];
}
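For contrast, here's roughly what that saves you compared to growing an array element by element (the sizes here are arbitrary, just for illustration):
let fast = array(1000, 0);                  // ~11 concatenations in total
let slow = [];
ascent(i in 0..1000){ slow = slow ~ [0]; }  // 1000 concatenations, each copying the whole array so far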
Also, because you can use this method to instantiate arrays of a certain size, but cannot do so with a hash table, creating a hash table mimicking an array of a fixed size will be significantly slower even ignoring the overhead explained above.
What hash tables do have over arrays in this sense is their variable size. But hash tables still have to resize internally once they fill past a certain load, and you can do the same thing with arrays to create dynamic arrays, though I'm not going to get into that.
Anyways, the majority of slowdown people run into with arrays in DNH comes from creating and manipulating them as though they were dynamic when they are not. Aside from the fact that you can't simply instantiate an array with a given size, they are just fine.
Now for some problems with your tests. First is that you have no proper control tests. You don't factor in what could be overhead and what is inherent to the method used. Not only does this mean you can't tell what is/isn't overhead, but also you can't properly measure and compare each method.
You use FPS to measure. While I've done this myself before, it's a bad metric because it stays pinned at 60 until things actually slow down, and it doesn't necessarily decrease linearly past that point. It's not a very reliable way to compare, especially for reasons I'll give later.
You use only arrays of length 10. Theoretically, if all of the test methods' accesses were O(1) with different constant factors, the size of the structures wouldn't matter. But not only should you not assume that to begin with; as explained above, the pseudo-array hash table access takes O(k) time in the digit length k. So with a pseudo-array of length 20000, accessing element 10000 will take slightly longer than accessing element 0 because of the string conversion.
You use ObjVal on the empty string. I'm not sure why you did this, but I really hope you don't do that in any other code. Strings are not objects you can use these functions on. While it might appear to work, what actually happens is that putting a string in place of the object ID falls back to the value 0, so you are really manipulating the object values of object ID 0. Check this example:
let a = "a";
Obj_SetValue(a, "key", "a:val");
let b = "b";
Obj_SetValue(b, "key", "b:val");
Obj_GetValueD(a, "key", "a:null"); // => "b:val"
Obj_GetValueD(b, "key", "b:null"); // => "b:val"
Obj_GetValueD(0, "key", "0:null"); // => "b:val"
Lastly, you're doing a hell of a lot of unnecessary branching by going through Arr_Set/Arr_Get on every single iteration. This will inflate the times, and the differences between each method will become muddy, especially since you don't have controls that would expose the overhead. This is further compounded by all the extra work you do each iteration: access the variables arr and i, create a variable v and set it, access all three of those again, add 1, do a set op, access i and size, add 1, do a modulo operation, and set i again. That's a lot of per-iteration overhead you're introducing, and it will likely dominate the runtime.
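For reference, this is roughly what I understand each of your test iterations to be doing (my reconstruction from the description above, not your exact code):
let v = Arr_Get(arr, i);    // access arr and i, branch inside Arr_Get, create v and set it
Arr_Set(arr, i, v + 1);     // access all three again, add 1, branch inside Arr_Set
i = (i + 1) % size;         // access i and size, add 1, modulo, set i again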
So basically this got me invested enough that, having done similar tests in the past, I went back and refined them even further.
Here is the code:
https://gist.github.com/drakeirving/a99d768beca4c69d7c44050620777b2f
Here are some results:
https://drive.google.com/file/d/1QmxcjFVENOtCjK8cxksA-N1K0WTjZvVr/view
Notes:
As I comment in the code, for a given total number of iterations, packing more iterations into each frame asymptotically speeds up the run. So with a setup like yours, where you do a fixed number of iterations per frame, every frame you spend waiting to update the screen adds to the runtime; with the same total number of iterations, doing more of them per frame (and running for fewer frames) gets faster and faster. The optimal way to test is therefore to not yield at all and only update the screen once the run is done.
DNH gives you the GetStageTime function to work in milliseconds. You can simply time with this, and it's much more accurate than checking FPS values; plus it's cumulative, so it represents the whole test, whereas FPS has to be sampled at particular points in time. Doing it this way also lets you test without needing to update the screen.
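Put together, the kind of measurement I mean looks something like this sketch (the iteration count, test key, and logging call are just placeholders):
let N = 1000000;

// Control run: loop and bookkeeping only, to establish the baseline overhead.
let t0 = GetStageTime();
ascent(i in 0..N){ AddScore(1); }
let baseline = GetStageTime() - t0;

// Method under test: the same loop plus the access being measured.
let t1 = GetStageTime();
ascent(i in 0..N){ AddScore(GetCommonData("key", 0)); }
let withAccess = GetStageTime() - t1;

// No yield anywhere above; only report once the run is finished.
WriteLog("access cost (ms): " ~ IntToString(withAccess - baseline));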
AddScore() has less overhead than incrementing a variable, so I use this. It's the operation with the least overhead I bothered testing that demonstrably does something.
Using CommonData has more overhead than AreaCommonData because of the extra string concatenation, but as shown by the controls, the actual access is the same. Meanwhile, ObjVals are inherently faster, but still far slower than array access even when ignoring the overhead.
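I read that as comparing access patterns along these lines (the area and key names are just examples, not the actual test code):
let key = "k0";

// Plain CommonData: the key gets namespaced by string concatenation on every call.
SetCommonData("MyTest_" ~ key, 1);

// AreaCommonData: the namespace is passed as a separate argument, with no concatenation.
CreateCommonDataArea("MyTest");
SetAreaCommonData("MyTest", key, 1);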
tl;dr use arrays when you want to use arrays. Use efficient methods if you need to make arrays of a certain size rather than just concatenating each element.
And use ObjVals over CommonData when you need to use dictionaries unless you actually need to use the global properties of CommonData.