Appending a byte to the queue works just like pushing onto a stack. However, when a queue is implemented with a static array, reading a byte off the front is usually done like this:
Code:
// Dequeue: shift every remaining byte one position to the left,
// then clear the now-vacant last slot.
for (int i = 1; i < x.Length; i++)
{
    x[i - 1] = x[i];
}
x[x.Length - 1] = 0;
This means shifting every byte behind the front (position 0) one position to the left, which costs O(n) per read. Slow as it is, this is the usual method for reading off the front of a queue implemented with a static array.
Mine, however, simply increments the pointer to the front of the queue when reading, so that it points to the byte second in line. Think of it as the front desk moving towards the queue, rather than everyone in the queue shuffling towards the desk when the person at the front leaves.
This means that sometimes, the byte at the back of the queue is at the beginning of the array. For example, if we did:
x = new byte[256];
and filled it up with 0xA2 values. begin points to x[0], end points to x[255].
Now we read from the queue. We get the value pointed to by begin (x[0]), then increment begin, so that begin points to x[1] and end still points to x[255]. Now when we add to the queue, end wraps around to the start of the array: begin points to x[1], end points to x[0].
I'm pretty sure someone has already implemented queues this way (it's usually called a circular buffer or ring buffer). If they haven't, I'll call it a "circle queue".
Another appropriate name would be a "bulldozer queue", since the desk moves to the people in line, rather than the people in line moving forwards to the desk.
This "desk" happens to literally be a bulldozer. It could also be a mammoth. Depends on what era we're in.
If begin is equal to (end+1)%x.Length, and a byte has been written to where end points (and not yet read), that means the queue is full, since there is no more free space.
If begin is equal to end and no byte has been written, or if the only byte in the queue has just been read, the queue is marked as empty.
In this way, we don't actually delete bytes (i.e. write zeros over them). We simply disregard that they are there. Hence "bulldozer queue", since we are stepping over and crushing their pretty faces when we write over them.
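A minimal sketch of the whole idea in C (the names circle_queue, cq_read, etc. are made up for illustration; the "has a byte been written" test is realized here as an explicit empty flag):

```c
#include <stdbool.h>
#include <stddef.h>

#define QSIZE 256

/* Illustrative sketch, not a drop-in implementation. */
typedef struct {
    unsigned char buf[QSIZE];
    size_t begin;  /* index of the front byte (next to be read)         */
    size_t end;    /* index of the most recently written byte           */
    bool   empty;  /* disambiguates begin == end: one byte vs. no bytes */
} circle_queue;

void cq_init(circle_queue *q) {
    q->begin = 0;
    q->end = 0;
    q->empty = true;
}

bool cq_full(const circle_queue *q) {
    return !q->empty && q->begin == (q->end + 1) % QSIZE;
}

/* Enqueue: the back of the queue wraps around to the start of the array. */
bool cq_write(circle_queue *q, unsigned char b) {
    if (cq_full(q)) return false;
    if (q->empty)
        q->end = q->begin;              /* first byte lands at the front */
    else
        q->end = (q->end + 1) % QSIZE;  /* wraps from QSIZE-1 back to 0  */
    q->buf[q->end] = b;
    q->empty = false;
    return true;
}

/* Dequeue: the "desk" (begin) advances; nothing is shifted or erased. */
bool cq_read(circle_queue *q, unsigned char *out) {
    if (q->empty) return false;
    *out = q->buf[q->begin];
    if (q->begin == q->end)
        q->empty = true;                /* that was the only byte */
    else
        q->begin = (q->begin + 1) % QSIZE;
    return true;
}
```

With this sketch, the walkthrough above plays out exactly: writing 256 bytes of 0xA2 leaves begin at 0 and end at 255 (full); reading one byte advances begin to 1; writing one more wraps end around to 0.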
This is obviously faster than the shift-everything method shown in the code snippet above in this post: reads become O(1) instead of O(n). It is only slightly faster than using a linked list, but it has the advantage of not using nearly as much memory (no per-node pointers or allocations). In other words, my method is awesome.
It still has the drawback of not being easily expandable without incurring a speed penalty. What if we want the maximum size of the queue to be bigger, while still retaining what is in the queue? If we expand the array, suddenly we'd have to move stuff around anyway: once to copy everything into the larger array, and, if end is lower than begin (i.e. the queue has wrapped), again to unwrap it so the contents sit in order from position 0. That penalty grows with the literal size of the array. We'd also temporarily need a copy of the array, so as to expand it without losing anything. That copy can be freed once it is no longer needed, but while it exists it uses extra memory that could be put to much better use. This isn't a problem if we don't expand often, but what if we are constantly expanding/contracting the array? However, no good programmer allows their arrays/lists to be expanded indefinitely (hence fixed-size malloc() buffers if you are using C/C++), so this point is moot...
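The move-everything step described above boils down to a single copy that "unwraps" the queue into a fresh, larger array. A sketch in C (cq_linearize is a made-up name):

```c
#include <stddef.h>

/* Copy a possibly-wrapped queue of `count` bytes, starting at index `begin`
   of `old` (capacity `oldcap`), into the start of `fresh`.  Afterwards the
   queue is contiguous: begin becomes 0 and end becomes count - 1. */
void cq_linearize(const unsigned char *old, size_t oldcap,
                  size_t begin, size_t count, unsigned char *fresh)
{
    for (size_t i = 0; i < count; i++)
        fresh[i] = old[(begin + i) % oldcap];  /* walk forwards, wrapping */
}
```

To grow the queue you would allocate the bigger array, call this, free the old array, and reset begin/end. The copy touches every stored byte, which is exactly the penalty in question.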
...if you only want a fixed maximum size for your queue (for example, you want the queue to be no larger than 1024 bytes), my method is not only fast, but uses a minimal amount of memory.
Of course, if the maximum size of the queue is 1024, you create a 1024-byte array up front. What if only 1 byte is currently stored in it? You are wasting 1023 bytes. I could instead grow the array on demand, up to a maximum size: use a 23-byte array to store 23 bytes, expand it to 24 when adding a new byte, and so on. This would use even less memory than my current method. However, it would be incredibly slow, especially if we read from and write to the queue very often, since every write would mean reallocating and copying the array.
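The grow-on-every-write alternative I'm rejecting might look like this in C (enqueue_growing is a hypothetical name; realloc does the reallocation and copy):

```c
#include <stdlib.h>

/* Hypothetical grow-by-one append: every write reallocates the array to hold
   exactly one more byte.  realloc may copy all `len` existing bytes, so n
   appends cost O(n^2) time in the worst case. */
unsigned char *enqueue_growing(unsigned char *q, size_t len, unsigned char b)
{
    unsigned char *bigger = realloc(q, len + 1);
    if (bigger == NULL) return NULL;    /* original array is still valid */
    bigger[len] = b;
    return bigger;
}
```

Compare that with the fixed-size circular approach, where a write is a single store plus an index increment.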
So I've kind of made the best trade off I can, gaining a middle ground between memory usage and CPU time.
I hope that this rather long-winded explanation makes my thinking clear.