Say I have an array, and every now and then the last subscript of the array is empty. If I wanted to remove the last subscript and keep the rest of the values in the array unchanged, would this work? Or is there a better way?
If MyArray(UBound(MyArray)) = "" Then ReDim Preserve MyArray(UBound(MyArray) - 1)
Edit: I used the word "subscript" where I meant "value".
If you are ever assigning a string in Visual Basic to empty, I recommend using vbNullString:
If MyArray(Ubound(MyArray)) = vbNullString then ReDim Preserve MyArray(Ubound(MyArray)-1)
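One thing the one-liner doesn't handle is an array that is already down to its last element: `ReDim Preserve MyArray(UBound(MyArray) - 1)` raises a subscript error once `UBound` would drop below `LBound`. A sketch of a guarded version (the helper name `TrimLastIfEmpty` is made up for illustration):

```vb
' Removes the last element of a string array if it is empty,
' leaving the rest of the array untouched. Does nothing if the
' array is down to a single element, since ReDim Preserve to
' below LBound would raise a subscript error.
Private Sub TrimLastIfEmpty(ByRef MyArray() As String)
    If UBound(MyArray) > LBound(MyArray) Then
        If MyArray(UBound(MyArray)) = vbNullString Then
            ReDim Preserve MyArray(UBound(MyArray) - 1)
        End If
    End If
End Sub
```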
Quote from: Imperceptus on February 13, 2004, 06:19 PM
Say I have an array, and every now and then the last subscript of the array is empty. If I wanted to remove the last subscript and keep the rest of the values in the array unchanged, would this work? Or is there a better way?
If MyArray(UBound(MyArray)) = "" Then ReDim Preserve MyArray(UBound(MyArray) - 1)
Edit: I used the word "subscript" where I meant "value".
'Member' is more technically correct. :)
Yes -- that should work. And, take TheMinistered's suggestion -- "" is still a string in memory, it's just empty. vbNullString is actually null.
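A quick way to see that difference (a sketch; `StrPtr` is an undocumented but well-known VB6 function that returns the address of a string's character data):

```vb
Dim Empty1 As String, Empty2 As String

Empty1 = ""            ' a real, zero-length BSTR allocated in memory
Empty2 = vbNullString  ' a null string pointer: nothing allocated

Debug.Print StrPtr(Empty1)      ' a nonzero address
Debug.Print StrPtr(Empty2)      ' 0

' They still compare equal, so existing = checks keep working:
Debug.Print (Empty1 = Empty2)   ' True
```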
Thanks to both of you; much appreciated.
If your array size changes often, I recommend that you don't ReDim for every item added or removed. Instead, start it off with, say, 10 extra slots and grow or shrink it as needed without overdoing it. This will help your speed if you are working with a constantly changing array.
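The pattern described above can be sketched like this: keep a logical item count separate from the array's allocated size, and only ReDim when the spare slots run out. (The names `Items`, `Count`, `GROW_BY`, and `AddItem` are made up for illustration.)

```vb
Private Const GROW_BY As Long = 10

Private Items() As String
Private Count As Long            ' logical number of items in use

Private Sub AddItem(ByVal Value As String)
    If Count = 0 Then ReDim Items(GROW_BY - 1)           ' first allocation
    If Count > UBound(Items) Then                        ' out of spare slots
        ReDim Preserve Items(UBound(Items) + GROW_BY)    ' grow in one step
    End If
    Items(Count) = Value
    Count = Count + 1
End Sub
```

This way, ten appends in a row cost one ReDim Preserve instead of ten.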
Quote from: o.OV on February 13, 2004, 09:14 PM
If your array size changes often, I recommend that you don't ReDim for every item added or removed. Instead, start it off with, say, 10 extra slots and grow or shrink it as needed without overdoing it. This will help your speed if you are working with a constantly changing array.
In the general case it's most efficient to double the size.
Quote from: Skywing on February 14, 2004, 11:17 AM
In the general case it's most efficient to double the size.
Why?
Quote from: Skywing on February 14, 2004, 11:17 AM
Quote from: o.OV on February 13, 2004, 09:14 PM
If your array size changes often, I recommend that you don't ReDim for every item added or removed. Instead, start it off with, say, 10 extra slots and grow or shrink it as needed without overdoing it. This will help your speed if you are working with a constantly changing array.
In the general case it's most efficient to double the size.
But he sounded memory-conscious and seemed to want to conserve memory.
I myself use memory generously, but double the size? :o
I guess it depends on the operation.
Yes. It can be proven that doubling the size is the most efficient way to do it.
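A rough way to see why (a sketch, not a formal proof): each ReDim Preserve copies the whole existing array, so growing one slot at a time to reach n items copies about 1 + 2 + ... + n ≈ n²/2 elements in total, while doubling copies at most 1 + 2 + 4 + ... + n/2 < n elements. The average (amortized) cost per append stays constant. Counting the copies directly:

```vb
' Compare total elements copied while growing an array to size N,
' one slot at a time versus by doubling. Illustration only.
Private Sub CompareGrowthCosts()
    Const N As Long = 1024
    Dim Size As Long, Copies As Long

    ' Strategy 1: grow by one slot per append.
    Copies = 0
    For Size = 1 To N - 1
        Copies = Copies + Size    ' growing from Size to Size+1 copies Size elements
    Next Size
    Debug.Print "Grow by 1: "; Copies   ' 1+2+...+1023 = 523,776

    ' Strategy 2: double the size whenever full.
    Copies = 0
    Size = 1
    Do While Size < N
        Copies = Copies + Size    ' doubling copies the current Size elements
        Size = Size * 2
    Loop
    Debug.Print "Doubling:  "; Copies   ' 1+2+4+...+512 = 1,023
End Sub
```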
Quote from: Skywing on February 14, 2004, 02:07 PM
Yes. It can be proven that doubling the size is the most efficient way to do it.
That proof is what I'm after. It's not obvious to me why that would be the case, so I'd like to learn. Of course, "general case" might imply certain assumptions. Maybe it implies that the array will have to be copied? Maybe it implies that I don't know the array will never have to grow more than 10% larger than its current size? It'd be nice to know what assumptions you're making.
At least, that was advertised as one of the things you'd learn in the algorithms course I'm taking. However, I'm not at that point yet, so I can't give you an answer just yet.
None of my books (at least the parts I've read) suggests that. Someone who has Knuth's algorithms books might look for array-growth optimizations and array-related algorithms.
Quote from: Skywing on February 15, 2004, 12:05 PM
At least, that was advertised as one of the things you'd learn in the algorithms course I'm taking. However, I'm not at that point yet, so I can't give you an answer just yet.
If low memory usage is a higher priority than copying speed, I doubt that. Also consider the possibility of resizing the array in place: for example, if you're allocating at the page level, you can just commit more pages, or remap the existing pages to another area of your virtual address space without copying the actual data.
No, I'd expect it's optimized for reducing the number of memcpys rather than reducing memory usage.
I think this algorithm is designed to be applicable to higher-level situations where you won't necessarily control where memory is allocated.