Why are memcpy() and memmove() faster than pointer increments?

#1
I am copying N bytes from `pSrc` to `pDest`. This can be done in a single loop:

    for (int i = 0; i < N; i++)
        *pDest++ = *pSrc++;

Why is this slower than `memcpy` or `memmove`? What tricks do they use to speed it up?

#2
`memcpy` can copy more than one byte at once depending on the computer's architecture. Most modern computers can work with 32 bits or more in a single processor instruction.

From [one example implementation][1]:

    * For speedy copying, optimize the common case where both pointers
    * and the length are word-aligned, and copy word-at-a-time instead
    * of byte-at-a-time. Otherwise, copy by bytes.
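As a minimal illustration of that fast path (the function name and structure are my own sketch, not taken from the linked source):

    #include <stddef.h>
    #include <stdint.h>

    /* If both pointers and the length are word-aligned, copy one 32-bit
     * word per iteration instead of one byte; otherwise fall back to a
     * byte-at-a-time copy. */
    void *copy_word_at_a_time(void *dst, const void *src, size_t len)
    {
        if ((((uintptr_t)dst | (uintptr_t)src | len) & 0x3) == 0) {
            uint32_t *d = dst;
            const uint32_t *s = src;
            for (size_t i = 0; i < len / 4; i++)
                d[i] = s[i];
        } else {
            unsigned char *d = dst;
            const unsigned char *s = src;
            for (size_t i = 0; i < len; i++)
                d[i] = s[i];
        }
        return dst;
    }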

[1]:


#3
As others have said, memcpy copies in chunks larger than one byte, and copying in word-sized chunks is much faster. However, most implementations take it a step further and run several MOV (word) instructions before looping. The advantage of copying in, say, eight-word blocks per loop iteration is that the loop itself is costly: this technique reduces the number of conditional branches by a factor of eight, optimizing the copy for large blocks. A sketch of such an unrolled loop follows.
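A minimal sketch of that idea, with hypothetical names; a real implementation would also handle unaligned pointers and byte-sized tails:

    #include <stddef.h>
    #include <stdint.h>

    /* Copy eight words per iteration so the loop branch is taken once
     * per block instead of once per word. Assumes both pointers are
     * word-aligned; leftover words are copied one at a time. */
    void copy_unrolled(uint32_t *dst, const uint32_t *src, size_t n_words)
    {
        while (n_words >= 8) {
            dst[0] = src[0];
            dst[1] = src[1];
            dst[2] = src[2];
            dst[3] = src[3];
            dst[4] = src[4];
            dst[5] = src[5];
            dst[6] = src[6];
            dst[7] = src[7];
            dst += 8;
            src += 8;
            n_words -= 8;
        }
        while (n_words--)   /* remainder */
            *dst++ = *src++;
    }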

#4
Memory copy routines can be far more complicated, and faster, than a simple copy via pointers such as:

    void simple_memory_copy(void* dst, void* src, unsigned int bytes)
    {
        unsigned char* b_dst = (unsigned char*)dst;
        unsigned char* b_src = (unsigned char*)src;
        for (unsigned int i = 0; i < bytes; ++i)  // unsigned to match 'bytes'
            *b_dst++ = *b_src++;
    }

**Improvements**

The first improvement one can make is to align one of the pointers on a word boundary (by word I mean native integer size, usually 32 bits/4 bytes, but it can be 64 bits/8 bytes on newer architectures) and use word-sized move/copy instructions. This requires a byte-by-byte copy until a pointer is aligned.

    #include <stdint.h>  // for uintptr_t

    void aligned_memory_copy(void* dst, void* src, unsigned int bytes)
    {
        unsigned char* b_dst = (unsigned char*)dst;
        unsigned char* b_src = (unsigned char*)src;

        // Copy bytes until the source pointer is word-aligned; a pointer
        // cannot be masked directly, so cast to uintptr_t first
        while (bytes > 0 && ((uintptr_t)b_src & 0x3) != 0)
        {
            *b_dst++ = *b_src++;
            bytes--;
        }

        // Copy a word at a time; note b_dst may still be unaligned,
        // which some architectures tolerate and others do not
        unsigned int* w_dst = (unsigned int*)b_dst;
        unsigned int* w_src = (unsigned int*)b_src;
        while (bytes >= 4)
        {
            *w_dst++ = *w_src++;
            bytes -= 4;
        }

        // Copy trailing bytes
        b_dst = (unsigned char*)w_dst;
        b_src = (unsigned char*)w_src;
        while (bytes > 0)
        {
            *b_dst++ = *b_src++;
            bytes--;
        }
    }

Different architectures will perform differently depending on whether the source or the destination pointer is appropriately aligned. For instance, on an XScale processor I got better performance by aligning the destination pointer rather than the source pointer.

To further improve performance, some loop unrolling can be done so that more of the processor's registers are loaded with data; that way the load/store instructions can be interleaved and have their latency hidden by additional instructions (such as loop counting). The benefit this brings varies quite a bit by processor, since load/store instruction latencies can be quite different.

At this stage the code ends up being written in Assembly rather than C (or C++) since you need to manually place the load and store instructions to get maximum benefit of latency hiding and throughput.

Generally a whole cache line of data should be copied in one iteration of the unrolled loop.

Which brings me to the next improvement: adding prefetching. These are special instructions that tell the processor's cache system to load specific parts of memory into its cache. Since there is a delay between issuing the instruction and having the cache line filled, the instructions need to be placed so that the data is available just as it is about to be copied, neither sooner nor later.

This means putting prefetch instructions at the start of the function as well as inside the main copy loop, with the prefetch instructions in the middle of the copy loop fetching data that will be copied several iterations later (see the sketch below).

I can't remember, but it may also be beneficial to prefetch the destination addresses as well as the source ones.
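As a rough sketch of the shape this takes (the prefetch distance of 32 words is a placeholder that would need per-processor tuning; `__builtin_prefetch` is a GCC/Clang hint, so prefetching slightly past the end of the buffer is harmless):

    #include <stddef.h>
    #include <stdint.h>

    /* Word copy with software prefetch: each iteration asks the cache
     * system for data that will be copied a few iterations later. */
    void copy_with_prefetch(uint32_t *dst, const uint32_t *src, size_t n_words)
    {
        size_t i = 0;

        __builtin_prefetch(&src[0]);    /* warm the cache up front */

        for (; i + 8 <= n_words; i += 8) {
            __builtin_prefetch(&src[i + 32]);   /* ~4 iterations ahead */
            dst[i + 0] = src[i + 0];
            dst[i + 1] = src[i + 1];
            dst[i + 2] = src[i + 2];
            dst[i + 3] = src[i + 3];
            dst[i + 4] = src[i + 4];
            dst[i + 5] = src[i + 5];
            dst[i + 6] = src[i + 6];
            dst[i + 7] = src[i + 7];
        }
        for (; i < n_words; i++)    /* remainder */
            dst[i] = src[i];
    }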

**Factors**

The main factors that affect how fast memory can be copied are:

* The latency between the processor, its caches, and main memory.
* The size and structure of the processor's cache lines.
* The processor's memory move/copy instructions (latency, throughput, register size, etc).

So if you want to write an efficient and fast memory copy routine, you'll need to know quite a lot about the processor and architecture you are writing for. Suffice it to say, unless you're writing for some embedded platform, it is much easier to just use the built-in memory copy routines.

#5
I don't know whether it is actually used in any real-world implementations of `memcpy`, but I think [Duff's Device][1] deserves a mention here.

From [Wikipedia][1]:

    send(to, from, count)
    register short *to, *from;
    register count;
    {
        register n = (count + 7) / 8;
        switch (count % 8) {
        case 0: do { *to = *from++;
        case 7:      *to = *from++;
        case 6:      *to = *from++;
        case 5:      *to = *from++;
        case 4:      *to = *from++;
        case 3:      *to = *from++;
        case 2:      *to = *from++;
        case 1:      *to = *from++;
                } while (--n > 0);
        }
    }

Note that the above isn't a `memcpy`, since it deliberately doesn't increment the `to` pointer. It implements a slightly different operation: writing into a memory-mapped register. See the Wikipedia article for details.
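For comparison, a hypothetical `memcpy`-style variant that increments both pointers might look like this (the `count == 0` guard matters, since the `do`/`while` otherwise runs at least once):

    #include <stddef.h>

    /* memcpy-style Duff's device: both pointers advance, so this copies
     * an ordinary block of memory rather than filling one register. */
    void duff_copy(char *to, const char *from, size_t count)
    {
        if (count == 0)
            return;
        size_t n = (count + 7) / 8;
        switch (count % 8) {
        case 0: do { *to++ = *from++;
        case 7:      *to++ = *from++;
        case 6:      *to++ = *from++;
        case 5:      *to++ = *from++;
        case 4:      *to++ = *from++;
        case 3:      *to++ = *from++;
        case 2:      *to++ = *from++;
        case 1:      *to++ = *from++;
                } while (--n > 0);
        }
    }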


[1]: https://en.wikipedia.org/wiki/Duff%27s_device


#6
Because, like many library routines, it has been optimized for the architecture you are running on. Others have posted the various techniques that can be used.

Given the choice, use library routines rather than rolling your own. This is a variation on DRY that I call DRO (Don't Repeat Others). Also, library routines are less likely to be wrong than your own implementation.

I have seen memory access checkers complain about out-of-bounds reads on memory or string buffers that were not a multiple of the word size. This is a result of the optimization being used.